
Multiscale analysis of slow-fast neuronal learning models with noise

Abstract

This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear to understand the specificity of several learning rules (Hebbian, trace or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire 'knowledge' of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the ratio of time-scales appears as an important parameter revealing temporal correlations.

1 Introduction

Complex systems are made of a large number of interacting elements leading to non-trivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional non-linear mathematical models integrating non-autonomous and stochastic perturbations.

Modeling these systems gives rise to many different scales both in space and in time. In particular, learning processes in the brain involve three time-scales: from neuronal activity (fast) and external stimulation (intermediate) to synaptic plasticity (slow). Here, the fast time-scale corresponds to a few milliseconds and the slow time-scale to minutes or hours, while the intermediate time-scale generally lies between the two, although some stimuli may be faster than the neuronal activity time-scale (e.g., sub-millisecond auditory signals [1]). The separation of these time-scales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant to handle and simplify such complex systems.

First, the stochastic averaging principle [2, 3] is a powerful tool to analyze the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect time-scale separation, this leads to a reduced slow system whose analysis enables a better understanding of the original stochastic model.

Second, periodic averaging theory [4], which was originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged.

It seems appropriate to gather these two methods to address our case of a noisy and input-driven slow-fast dynamical system. This combined approach provides a novel way to understand the interactions between the three time-scales relevant in our models. More precisely, we will consider the following class of multiscale stochastic differential equations (SDEs), with $\epsilon_1, \epsilon_2 > 0$ two small parameters:

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon_1}\big[F\big(v^\epsilon, w^\epsilon, u(t/\epsilon_2)\big)\big]\, dt + \frac{1}{\sqrt{\epsilon_1}}\, \Sigma\, dB(t), \\ dw^\epsilon = G(v^\epsilon, w^\epsilon)\, dt, \end{cases} \tag{1}$$

where $v^\epsilon \in \mathbb{R}^p$ represents the fast activity of the individual elements, $w^\epsilon \in \mathbb{R}^q$ represents the connectivity weights that vary slowly due to plasticity, and $u(t) \in \mathbb{R}^p$ represents the value of the external input at time $t$. Random perturbations are included in the form of a diffusion term, and $(B(t))$ is a standard Brownian motion.

We are interested in the double limit $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$ to describe the evolution of the slow variable $w$ in the asymptotic regime where both the variable $v$ and the external input are much faster than $w$. This asymptotic regime corresponds to the study of a neuronal network in which both the external input $u$ and the neuronal activity $v$ operate on a faster time-scale than the slow plasticity-driven evolution of the synaptic weights $w$. To account for the possible difference of time-scales between $v$ and the input, we introduce the time-scale ratio $\mu = \epsilon_1/\epsilon_2 \in [0, \infty]$. In the interesting case where $\mu \in (0, \infty)$, one needs to understand the long-time behavior of the rescaled periodically forced SDE, for any fixed $w_0$:

$$dv = F(v, w_0, \mu t)\, dt + \Sigma(v, w_0)\, dB(t).$$

Recently, in an important contribution [5], a precise understanding of the long-time behavior of such processes has been obtained using methods from partial differential equations. In particular, conditions ensuring the existence of a periodic family of probability measures to which the law of v converges as time grows have been identified, together with a sharp estimation of the speed of mixing. These results are at the heart of the extension of the classical stochastic averaging principle [2] to the case of periodically forced slow-fast SDEs [6]. As a result, we obtain a reduced equation describing the slow evolution of variable w in the form of an ordinary differential equation,

$$\frac{dw}{dt} = \bar{G}(w),$$

where $\bar{G}$ is constructed as an average of $G$ with respect to a specific probability measure, as explained in Section 2.

This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks.

The individual elements of these networks are neurons or populations of neurons. A common assumption at the basis of mathematical neuroscience [7] is to model their behavior by a stochastic differential equation which is made of four different contributions: (i) an intrinsic dynamics term, (ii) a communication term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic variability. Assuming that their activity is represented by the fast variable $v \in \mathbb{R}^n$, the first equation of system (1) is a generic representation of a neural network (the function $F$ corresponds to the first three terms contributing to the dynamics). In the literature, the level of non-linearity of the function $F$ ranges from linear (or almost-linear) systems to spiking neuron dynamics [8], yet the structure of the system is universal.

These neurons are interconnected through a connectivity matrix which represents the strength of the synapses connecting the neurons together. The slow modification of the connectivity between the neurons is commonly thought to be the essence of learning. Unsupervised learning rules update the connectivity exclusively based on the value of the activity variable. Therefore, this mechanism is represented by the slow equation above, where $w \in \mathbb{R}^{n\times n}$ is the connectivity matrix and $G$ is the learning rule. Probably the most famous of these rules is the Hebbian learning rule introduced in [9]. It says that if both neurons $A$ and $B$ are active at the same time, then the synapses from $A$ to $B$ and from $B$ to $A$ should be strengthened proportionally to the product of the activities of $A$ and $B$. There are many different variations of this correlation-based principle, which can be found in [10, 11]. Another recent, unsupervised, biologically motivated learning rule is spike-timing-dependent plasticity (STDP), reviewed in [12]. It is similar to Hebbian learning except that it focuses on causation instead of correlation and that it occurs on a faster time-scale. Both of these types of rules correspond to $G$ being quadratic in $v$.

The previous literature about dynamic learning networks is extensive, yet we take a significantly different approach to understand the problem. A historical focus was the understanding of feedforward deterministic networks [13–15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule [16]. Actually, most current research in the field is focused on STDP and is based on the precise times of the spikes, making them explicit in computations [17–20]. Our approach is different from the others regarding at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without requiring them to be spiking. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models expressed in a slow-fast SDE formalism are analyzed using temporal averaging principles.

The purpose of this application is to understand what the network learns from exposure to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow time-scale, under the influence of the external input and of noise added on the fast variable. More precisely, we intend to explicitly compute the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically demanding for untrained readers, we have tried to extract widely understandable conclusions from our mathematical results, and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large-scale networks.

Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic when modeling individual neurons, but it is more reasonable when modeling populations of neurons; see Chapter 11 of [7].

The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory. Theorem 2.2, the main result of this section, provides the technical tool to tackle learning neural networks. Section 3 applies the mathematical tools developed in the previous section to models of learning neural networks. A generic model is described and three particular models of increasing complexity are analyzed: first Hebbian learning, then trace learning, and finally STDP learning, all for linear activities. Finally, Section 4 is a discussion of the consequences of the previous results from the viewpoint of their biological interpretation.

2 Averaging principles: theory

In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3.

2.1 Periodic averaging principle

We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the time-scale separation of the system and the time-scale of the input appears as a new crucial parameter.

Example 2.1 Consider the following linear time-inhomogeneous dynamical system, with $\epsilon_1, \epsilon_2 > 0$ two parameters:

$$\frac{dv^\epsilon}{dt} = \frac{1}{\epsilon_1}\Big({-v^\epsilon} + \sin\big(\tfrac{t}{\epsilon_2}\big)\Big), \qquad \frac{dw^\epsilon}{dt} = -w^\epsilon + (v^\epsilon)^2.$$

This system is particularly handy since one can solve analytically the first ordinary differential equation, that is,

$$v(t) = \frac{1}{1+\mu^2}\Big(\sin\big(\tfrac{t}{\epsilon_2}\big) - \mu\cos\big(\tfrac{t}{\epsilon_2}\big)\Big) + v_0\, e^{-t/\epsilon_1},$$

where we have introduced the time-scales ratio

$$\mu := \frac{\epsilon_1}{\epsilon_2}.$$

In this system, one can distinguish various asymptotic regimes when ϵ 1 and ϵ 2 are small according to the asymptotic value of μ:

  • Regime 1: Slow input, $\mu = 0$:

First, if $\epsilon_1 \to 0$ and $\epsilon_2$ is fixed, then $v(t)$ is close to $\sin(t/\epsilon_2)$, and from geometric singular perturbation theory [21, 22] one can approximate the slow variable $w^\epsilon$ by the solution of

$$\frac{dw}{dt} = -w + \Big(\sin\big(\tfrac{t}{\epsilon_2}\big)\Big)^2.$$

Now taking the limit $\epsilon_2 \to 0$ and applying the classical averaging principle [4] for periodically driven differential equations, one can approximate $w^\epsilon$ by the solution of

$$\frac{dw}{dt} = -w + \frac{1}{2},$$

since $\frac{1}{2\pi}\int_0^{2\pi} \sin(s)^2\, ds = \frac{1}{2}$.

  • Regime 2: Fast input, $\mu = \infty$:

If $\epsilon_2 \to 0$ and $\epsilon_1$ is fixed, then the classical averaging principle implies that $v^\epsilon$ is close to the solution of

$$\frac{dv}{dt} = -\frac{v}{\epsilon_1},$$

so that $w^\epsilon$ can be approximated by

$$\frac{dw}{dt} = -w + \big(v_0\, e^{-t/\epsilon_1}\big)^2,$$

and when $\epsilon_1 \to 0$, one does not recover the same asymptotic behavior as in Regime 1.

  • Regime 3: Time-scales matching, $0 < \mu < \infty$:

Now consider the intermediate case where $\epsilon_1$ is asymptotically proportional to $\epsilon_2$. In this case, $v^\epsilon$ can be approximated on the fast time-scale $t/\epsilon_1$ by the periodic solution $\bar v_\mu(t) = \frac{1}{1+\mu^2}\big(\sin(\mu t) - \mu\cos(\mu t)\big)$ of $\frac{dv}{dt} = -v + \sin(\mu t)$. As a consequence, $w^\epsilon$ will be close to the solution of

$$\frac{dw}{dt} = -w + \frac{1}{2(1+\mu^2)},$$

since $\frac{1}{2\pi}\int_0^{2\pi} \bar v_\mu(t/\mu)^2\, dt = \frac{1}{2(1+\mu^2)}$.

Thus, we have seen in this example that:

  1. the two limits $\epsilon_1 \to 0$ and $\epsilon_2 \to 0$ do not commute;

  2. the ratio $\mu$ between the internal time-scale separation $\epsilon_1$ and the input time-scale $\epsilon_2$ is a key parameter in the study of slow-fast systems subject to a time-dependent perturbation.
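As a quick numerical illustration of the time-scale matching regime, the following sketch (with assumed, arbitrary parameter values) integrates the slow-fast system with a simple Euler scheme and compares the slow variable at a final time with the averaged equilibrium $\frac{1}{2(1+\mu^2)}$.

```python
# Sketch: Euler integration of Example 2.1 in the time-scale matching regime.
# Parameter values (T, dt, mu, eps) are illustrative assumptions.
import numpy as np

def simulate(eps1, eps2, T=5.0, dt=1e-5, v0=0.0, w0=0.0):
    """Integrate dv/dt = (-v + sin(t/eps2))/eps1, dw/dt = -w + v^2."""
    v, w = v0, w0
    for k in range(int(T / dt)):
        t = k * dt
        v += dt * (-v + np.sin(t / eps2)) / eps1
        w += dt * (-w + v**2)
    return w

mu = 1.0
for eps in (1e-2, 1e-3):
    w_T = simulate(eps1=eps, eps2=eps / mu)
    print(f"eps={eps:.0e}: w(T)={w_T:.4f}  vs averaged equilibrium {1/(2*(1+mu**2)):.4f}")
```

As $\epsilon$ decreases, the final value of $w$ approaches the equilibrium of the averaged equation of Regime 3.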

2.2 Stochastic averaging principle

Time-scale separation is a key property to investigate the dynamical behavior of non-linear multiscale systems, with techniques ranging from averaging principles to geometric singular perturbation theory. This property appears to be also crucial to understanding the impact of noise. Instead of carrying out a small-noise analysis, a multiscale approach based on the stochastic averaging principle [2] can be a powerful tool to unravel subtle interplays between noise properties and non-linearities. More precisely, consider a system of SDEs in $\mathbb{R}^{p+q}$:

$$\begin{cases} dv_t^\epsilon = \frac{1}{\epsilon}\, F(v_t^\epsilon, w_t^\epsilon)\, dt + \frac{1}{\sqrt{\epsilon}}\, \Sigma(v_t^\epsilon, w_t^\epsilon)\, dB(t), \\ dw_t^\epsilon = G(v_t^\epsilon, w_t^\epsilon)\, dt, \end{cases}$$

with initial conditions $v^\epsilon(0) = v_0$, $w^\epsilon(0) = w_0$, and where $w^\epsilon \in \mathbb{R}^q$ is called the slow variable and $v^\epsilon \in \mathbb{R}^p$ the fast variable, with $F$, $G$, $\Sigma$ smooth functions ensuring existence and uniqueness of the solution $(v^\epsilon, w^\epsilon)$, and $B(t)$ a $p$-dimensional standard Brownian motion defined on a filtered probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Time-scale separation is encoded in the small parameter $\epsilon$, which denotes in this section a single positive real number.

In order to approximate the behavior of $(v^\epsilon, w^\epsilon)$ for small $\epsilon$, the idea is to average out the equation for the slow variable with respect to the stationary distribution of the fast one. More precisely, one first assumes that for each fixed $w \in \mathbb{R}^q$, the frozen fast SDE,

$$dv_t = F(v_t, w)\, dt + \Sigma(v_t, w)\, dB(t),$$

admits a unique invariant measure, denoted $\rho_w(dv)$. Then, one defines the averaged drift vector field $\bar G$ by

$$\bar G(w) := \int_{\mathbb{R}^p} G(v, w)\, \rho_w(dv) \tag{2}$$

and $w$ the solution of $\frac{dw}{dt} = \bar G(w)$ with initial condition $w(0) = w_0$. Under some dissipativity assumptions, the stochastic averaging principle [2] states:

Theorem 2.1 For any $\delta > 0$ and $T > 0$,

$$\lim_{\epsilon\to 0} \mathbb{P}\Big[\sup_{t\in[0,T]} \|w_t^\epsilon - w_t\|^2 > \delta\Big] = 0. \tag{3}$$

As a consequence, analyzing the behavior of the deterministic solution w can help to understand useful features of the stochastic process ( v ϵ , w ϵ ).

Example 2.2 In this example, we consider a system similar to that of Example 2.1, but with a noise term instead of the periodic perturbation. Namely, we consider $(v^\epsilon, w^\epsilon)$ the solution of the system of SDEs

$$\begin{cases} dv^\epsilon = -\frac{1}{\epsilon}\, v^\epsilon\, dt + \frac{\sigma}{\sqrt{\epsilon}}\, dB(t), \\ dw^\epsilon = \big({-w^\epsilon} + (v^\epsilon)^2\big)\, dt, \end{cases}$$

with ϵ>0 a small parameter and σ>0 a positive constant. From Theorem 2.1, the stochastic slow variable w ϵ can be approximated in the sense of (3) by the deterministic solution w of

$$\frac{dw}{dt} = \int_{v\in\mathbb{R}} \big({-w} + v^2\big)\, \rho(dv),$$

where ρ(dv) is the stationary measure of the linear diffusion process,

$$dv = -v\, dt + \sigma\, dB(t),$$

that is,

$$\rho(dv) = \frac{1}{\sigma\sqrt{\pi}}\, e^{-v^2/\sigma^2}\, dv.$$

Consequently, $w^\epsilon$ can be approximated in the limit $\epsilon \to 0$ by the solution of

$$\frac{dw}{dt} = -w + \frac{\sigma^2}{2}.$$

Applying (3) leads to the following result: for any $T > 0$ and $\delta > 0$,

$$\lim_{\epsilon\to 0} \mathbb{P}\Big[\sup_{t\in[0,T]} \Big| w_t^\epsilon - \Big(w_0 - \frac{\sigma^2}{2}\Big) e^{-t} - \frac{\sigma^2}{2}\Big|^2 > \delta\Big] = 0.$$

Interestingly, the asymptotic behavior of w ϵ for small ϵ is characterized by a deterministic trajectory that depends on the strength σ of the noise applied to the system. Thus, the stochastic averaging principle appears particularly interesting when unraveling the impact of noise strength on slow-fast systems.
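The following minimal sketch (assumed parameter values) illustrates this convergence: an Euler-Maruyama discretization of the system is compared with the deterministic averaged trajectory $w(t) = (w_0 - \sigma^2/2)e^{-t} + \sigma^2/2$.

```python
# Sketch: Euler-Maruyama for Example 2.2 vs the averaged deterministic solution.
# eps, sigma, T, dt are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
eps, sigma, T, dt = 1e-3, 0.5, 5.0, 1e-5
v, w = 0.0, 0.0
for k in range(int(T / dt)):
    v += -v * dt / eps + (sigma / np.sqrt(eps)) * np.sqrt(dt) * rng.standard_normal()
    w += (-w + v**2) * dt
w_avg = (0.0 - sigma**2 / 2) * np.exp(-T) + sigma**2 / 2  # averaged solution at time T
print(f"w_eps(T) = {w:.4f}   averaged w(T) = {w_avg:.4f}")  # both close to sigma^2/2
```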

Many other results have been developed since, extending the set-up to the case where the slow variable has a diffusion component or to infinite-dimensional settings for instance, and also refining the convergence study, providing homogenization results concerning the limit of $\epsilon^{-1/2}(w^\epsilon - w)$ or establishing large deviation principles (see [23] for a recent monograph). However, fewer results are available in the case of non-homogeneous SDEs, that is, when the system is perturbed by an external time-dependent signal. This setting is of particular interest in the framework of stochastic learning models, and we present the main relevant mathematical results in the following section.

2.3 Double averaging principle

Combining the ideas of periodic and stochastic averaging introduced previously, we present here theoretical results concerning multiscale SDEs driven by an external time-periodic input. Consider $(v^\epsilon, w^\epsilon)$ the solution of

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon_1}\Big[F\Big(v^\epsilon, w^\epsilon, \tfrac{t}{\epsilon_2}\Big)\Big]\, dt + \frac{1}{\sqrt{\epsilon_1}}\, \Sigma(v^\epsilon, w^\epsilon)\, dB(t), \\ dw^\epsilon = G(v^\epsilon, w^\epsilon)\, dt, \end{cases} \tag{4}$$

with $t \mapsto F(v, w, t) \in \mathbb{R}^p$ a $\tau$-periodic function and $\epsilon = (\epsilon_1, \epsilon_2) \in \mathbb{R}_+^2$. The parameter $\epsilon_1$ represents the internal time-scale separation and $\epsilon_2$ the input time-scale. We consider the case where both $\epsilon_1$ and $\epsilon_2$ are small, that is, a strong time-scale separation between the fast variable $v^\epsilon \in \mathbb{R}^p$ and the slow one $w^\epsilon \in \mathbb{R}^q$, and a fast periodic modulation of the fast drift $F(v, w, \cdot)$.

We further denote z=(v,w).

Definition 2.1 We define the asymptotic time-scale ratio

$$\mu := \lim_{|\epsilon|\to 0} \frac{\epsilon_1}{\epsilon_2}. \tag{5}$$

Accordingly, we denote by $\lim_{|\epsilon|\to 0}^{\mu}$ the distinguished limit when $\epsilon_1 \to 0$, $\epsilon_2 \to 0$ with $\epsilon_1/\epsilon_2 \to \mu$.

The following assumption is made to ensure the existence and uniqueness of a strong solution to system (4). In the following, $\langle z_1, z_2\rangle$ will denote the usual scalar product for vectors.

Assumption 2.1 Existence and uniqueness of a strong solution

  (i) The functions $F$, $G$, and $\Sigma$ are locally Lipschitz continuous in the space variable $z$. More precisely, for any $R > 0$, there exists a constant $\alpha_R$ such that

    $$\|F(z) - F(z')\| \le \alpha_R\, \|z - z'\| \quad\text{for any } z, z' \in \mathbb{R}^{p+q} \text{ with } \|z\| \le R \text{ and } \|z'\| \le R,$$

  and similarly for $G$ and $\Sigma$.

  (ii) There exists a constant $R > 0$ such that

    $$\sup_{\|z\| > R,\, t > 0} \frac{\big\langle \big(F(z,t), G(z)\big),\, z\big\rangle}{\|z\|^2} < 0.$$

To control the asymptotic behavior of the fast variable, one further assumes the following.

Assumption 2.2 Asymptotic behavior of the fast process:

  (i) The diffusion matrix $\Sigma$ is bounded,

    $$\exists M_\Sigma > 0 \text{ s.t. } \forall z,\ \|\Sigma(z)\| < M_\Sigma,$$

  and uniformly non-degenerate,

    $$\exists \eta_0 > 0 \text{ s.t. } \forall v, z,\ \big\langle \Sigma(z)\Sigma(z)^\top v,\, v\big\rangle \ge \eta_0\, \|v\|^2.$$

  (ii) There exists $r_0 < 0$ such that for all $t \ge 0$ and for all $z, x \in \mathbb{R}^{p+q}$,

    $$\big\langle \nabla_z F(z,t)\, x,\, x\big\rangle \le r_0\, \|x\|^2.$$

According to the value of $\mu \in [0, \infty]$, the stochastic averaging principle is based on a description of the asymptotic behavior of various rescaled fast frozen processes. More precisely, under Assumptions 2.1 and 2.2, one can deduce the following:

  • For any fixed $w_0 \in \mathbb{R}^q$ and $t_0 > 0$, the law of the rescaled time-homogeneous frozen process,

    $$dv = F(v, w_0, t_0)\, dt + \Sigma(v, w_0)\, dB(t),$$

converges exponentially fast to a unique invariant probability measure denoted by $\rho_{w_0, t_0}(dv)$.

  • For any fixed $w_0 \in \mathbb{R}^q$, there exists a $\frac{\tau}{\mu}$-periodic evolution system of measures $\nu_\mu^{w_0}(t, dv)$, different from $\rho_{w_0, t}(dv)$ above, such that the law of the rescaled time-inhomogeneous frozen process,

    $$dv = F(v, w_0, \mu t)\, dt + \Sigma(v, w_0)\, dB(t), \tag{6}$$

converges exponentially fast towards $\nu_\mu^{w_0}(t, \cdot)$, uniformly with respect to $w_0$ (cf. the Appendix, Theorem A.1).

  • For any fixed $w_0 \in \mathbb{R}^q$, the law of the rescaled time-homogeneous frozen process,

    $$dv = \bar F(v, w_0)\, dt + \Sigma(v, w_0)\, dB(t),$$

where $\bar F(v, w_0) := \tau^{-1}\int_0^\tau F(v, w_0, t)\, dt$, converges exponentially fast towards a unique invariant probability measure denoted by $\bar\rho_{w_0}(dv)$.

According to the value of $\mu$, we introduce a vector field $\bar G_\mu$ which will play a role similar to that of $\bar G$ introduced in equation (2).

Definition 2.2 We define $\bar G_\mu: \mathbb{R}^q \to \mathbb{R}^q$ as follows. In the time-scale matching case, that is, when $0 < \mu < \infty$,

$$\bar G_\mu(w) := \Big(\frac{\tau}{\mu}\Big)^{-1} \int_0^{\tau/\mu} \int_{v\in\mathbb{R}^p} G(v, w)\, \nu_\mu^w(t, dv)\, dt. \tag{7}$$

Notation We may denote the periodic system of measures $\nu_\mu^w(t, dv)$ associated with (6) by $\nu_\mu^w[F, \Sigma](t, dv)$ to emphasize its relationship with $F$ and $\Sigma$. Accordingly, we may denote $\bar G_\mu(w)$ by $\bar G_\mu[F, \Sigma](w)$.

We are now able to present our main mathematical result. Extending Theorem 2.1, the following theorem describes the asymptotic behavior of the slow variable $w^\epsilon$ when $\epsilon \to 0$ with $\epsilon_1/\epsilon_2 \to \mu$. We refer to [6] for more details about the full mathematical proof of this result.

Theorem 2.2 Let $\mu \in (0, \infty)$. If $w$ is the solution of

$$\frac{dw}{dt} = \bar G_\mu(w) \quad\text{with } w(0) = w^\epsilon(0), \tag{8}$$

then the following convergence result holds, for all $T > 0$ and $\delta > 0$:

$$\lim_{|\epsilon|\to 0}^{\mu} \mathbb{P}\Big[\sup_{t\in[0,T]} |w_t^\epsilon - w_t|^2 > \delta\Big] = 0.$$

Remark 2.1

  1. The extremal cases $\mu = 0$ and $\mu = \infty$ are not covered in full rigor by Theorem 2.2. However, the study of the sequential limits $\epsilon_1 \to 0$ followed by $\epsilon_2 \to 0$, or $\epsilon_2 \to 0$ followed by $\epsilon_1 \to 0$, can be deduced from an appropriate combination of classical periodic and stochastic averaging theorems:

  • Slow input: If the limit $\epsilon_1 \to 0$ is taken first, then from Theorem 2.1, applied with fast variable $v^\epsilon$ and slow variables $w^\epsilon$ and $t$ (with the trivial equation $\dot t = 1$), $w^\epsilon$ is close in probability on finite time-intervals to the solution of the following inhomogeneous ordinary differential equation:

    $$\frac{d\tilde w}{dt} = \int_{v\in\mathbb{R}^p} G(v, \tilde w)\, \rho_{\tilde w, t/\epsilon_2}(dv) =: \tilde G(\tilde w, t/\epsilon_2).$$

  Then taking the limit $\epsilon_2 \to 0$, one can apply the deterministic averaging principle to the fast periodic vector field $\tilde G(w, t/\epsilon_2)$, so that $\tilde w$ converges when $\epsilon_2 \to 0$ to the solution of

    $$\frac{dw}{dt} = \tau^{-1}\int_0^\tau \tilde G(w, t)\, dt = \bar G_0(w),$$

  where

    $$\bar G_0(w) := \tau^{-1}\int_0^\tau \int_{v\in\mathbb{R}^p} G(v, w)\, \rho_{w,t}(dv)\, dt.$$

  • Fast input: If the limit $\epsilon_2 \to 0$ is taken first, one first has to perform a classical averaging of the periodic drift $F(v, w, t/\epsilon_2)$, leading to the homogeneous system of SDEs (4) but with $\bar F(v, w)$ instead of $F(v, w, t/\epsilon_2)$. Then, an application of Theorem 2.1 to this system gives the averaged vector field

    $$\bar G_\infty(w) := \int_{v\in\mathbb{R}^p} G(v, w)\, \bar\rho_w(dv).$$

  2. To study the extremal cases $\mu = 0$ and $\mu = \infty$ in full generality, one would need to consider all the possible relationships between $\epsilon_1$ and $\epsilon_2$, not only the linear one as in the present article, but also those of the type $\epsilon_1 = \epsilon_2^\alpha$, for example. In this case, we believe that the regime $\alpha < 1$ converges to the same limit as taking the limit $\epsilon_2 \to 0$ first, and that the regime $\alpha > 1$ corresponds to taking the limit $\epsilon_1 \to 0$ first. The intermediate regime $\alpha = 1$ seems to be the only one for which the limit cannot be obtained by combining classical averaging principles. Therefore, the present article is focused on this case, in which the averaged system depends explicitly on the scaling parameter $\mu$. Moreover, in terms of applications, this parameter has a relatively easy interpretation in terms of the ratio of time-scales between intrinsic neuronal activity and typical stimulus time-scales in a given situation. Although the zeroth-order limit (i.e., the averaged system) seems to depend only on the position of $\alpha$ with respect to 1, it seems reasonable to expect that the fluctuations around the limit would depend on the precise value of $\alpha$. This is a difficult question which may deserve further analysis.

The case $0 < \mu < \infty$ is already very rich in the sense that it combines simultaneously both the periodic and stochastic averaging principles in a new way that cannot be recovered by sequential applications of those principles. A particular role is played by the frozen periodically forced SDE (6). The equivalent of the quasi-stationary measure $\rho_w$ of Theorem 2.1 is given by the asymptotically periodic behavior of equation (6), represented by the periodic family of measures $\nu_\mu^w(t, dv)$.

  3. By a rescaling of the frozen process (6), one deduces the following scaling relationships:

    $$\nu_\mu^w[F, \Sigma](t, dv) = \nu_1^w\Big[\frac{F}{\mu},\, \frac{\Sigma}{\sqrt{\mu}}\Big](\mu t, dv)$$

  and

    $$\bar G_\mu[F, \Sigma](w) = \bar G_1\Big[\frac{F}{\mu},\, \frac{\Sigma}{\sqrt{\mu}}\Big](w).$$

  Therefore, if one knows, in the case $\mu = 1$, the averaged vector field associated with the fast process generated by a drift $F$ and a diffusion coefficient $\Sigma$, denoted $\bar G_1[F, \Sigma]$, it is possible to deduce $\bar G_\mu$ in the general case $\mu \in (0, \infty)$ through the changes $F \to F/\mu$ and $\Sigma \to \Sigma/\sqrt{\mu}$.

  4. It seems reasonable to expect that the above result remains valid when considering an ergodic, but not necessarily periodic, time dependency of the function $F(v, w, \cdot)$. In equation (7), instead of integrating $\nu_\mu^w(t, dv)$ over one period, one should integrate it with respect to an ergodic stationary measure. However, this extension requires non-trivial technical improvements of [5] which are beyond the scope of this paper.

2.3.1 Case of a fast linear SDE with periodic input

We present here an elementary case where one can compute explicitly the quasi-stationary time-periodic family of measures $\nu_\mu^w(t, dv)$, when the equation for the fast variable is linear. Namely, we consider $v \in \mathbb{R}^p$ the solution of

$$dv(t) = \big({-A v(t)} + u(\mu t)\big)\, dt + \Sigma\, dB(t),$$

with initial condition $v(0) = v_0 \in \mathbb{R}^p$, and where $A \in \mathbb{R}^{p\times p}$ is a matrix whose eigenvalues have positive real parts and $u(\cdot)$ is a $\tau$-periodic function.

We are interested in the large time behavior of the law of $v(t)$, which is a time-inhomogeneous Ornstein-Uhlenbeck process. From [5] we know that its law converges to a $\frac{\tau}{\mu}$-periodic family of probability measures $\nu(t, dv)$. Due to the linearity of the previous equation, $\nu(t, dv)$ is Gaussian with a time-dependent mean and a constant covariance matrix:

$$\nu(t, dv) = \mathcal{N}_{\bar v(t),\, Q}(dv),$$

where $\bar v$ is the $\frac{\tau}{\mu}$-periodic attractor of $\frac{d\bar v}{dt} = -A \bar v(t) + u(\mu t)$, i.e.,

$$\bar v(t) = \int_{-\infty}^{t} e^{-A(t-s)}\, u(\mu s)\, ds,$$

and $Q$ is the unique solution of the Lyapunov equation

$$AQ + QA^\top - \Sigma\Sigma^\top = 0. \tag{9}$$

Indeed, if one denotes $c(t) = v(t) - \bar v(t)$, then $c(t)$ is a solution of the classical homogeneous Ornstein-Uhlenbeck equation

$$dc(t) = -A c(t)\, dt + \Sigma\, dB(t),$$

whose stationary distribution is known to be a centered Gaussian measure with covariance matrix $Q$ the solution of (9); see Chapter 3.2 of [24]. Notice that if $A$ is self-adjoint with respect to $(\Sigma\Sigma^\top)^{-1}$ (i.e., $A(\Sigma\Sigma^\top) = (\Sigma\Sigma^\top)A^\top$), then the solution is $Q = \frac{A^{-1}(\Sigma\Sigma^\top)}{2} = \frac{(\Sigma\Sigma^\top)A^{-\top}}{2}$, which will be used in Appendix B.2.

Hence, in the linear case, the averaged vector field of equation (7) becomes

$$\bar G_\mu(w) := \Big(\frac{\tau}{\mu}\Big)^{-1} \int_0^{\tau/\mu} \int_{v\in\mathbb{R}^p} G\big(\bar v(t) + v,\, w\big)\, \mathcal{N}_{0,Q}(dv)\, dt, \tag{10}$$

where $\mathcal{N}_{x,Q}$ is the Gaussian law with mean $x \in \mathbb{R}^p$ and covariance $Q \in \mathbb{R}^{p\times p}$.

Therefore, due to the linearity of the fast SDE, the periodic system of measures $\nu$ is just a constant Gaussian distribution shifted by the periodic function of time $\bar v(t)$. In case $G$ is quadratic in $v$, this remark implies that one can perform independently the integrals over time and over $\mathbb{R}^p$ in formula (10) (noting that the cross term has a zero average). In this case, the contributions from the periodic input and from the noise appear in the averaged vector field in an additive way.

Example 2.3 In this last example, we consider a combination of Example 2.1 and Example 2.2, namely the following system of periodically forced SDEs:

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon_1}\Big[{-v^\epsilon} + \sin\big(\tfrac{t}{\epsilon_2}\big)\Big]\, dt + \frac{\sigma}{\sqrt{\epsilon_1}}\, dB(t), \\ dw^\epsilon = \big({-w^\epsilon} + (v^\epsilon)^2\big)\, dt. \end{cases}$$

As in Example 2.1 and as shown above, the behavior of this system when both ϵ 1 and ϵ 2 are small depends on the parameter μ defined in (5). More precisely, we have the following three regimes:

  • Regime 1: slow input:

    $$\bar G_0(w) = -w + \frac{\sigma^2}{2} + \frac{1}{2}.$$

  • Regime 2: fast input:

    $$\bar G_\infty(w) = -w + \frac{\sigma^2}{2}.$$

  • Regime 3: time-scale matching:

    $$\bar G_\mu(w) = -w + \frac{\sigma^2}{2} + \frac{1}{2(1+\mu^2)}.$$
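A quick numerical check of the time-scale matching regime (with assumed values for $\mu$ and $\sigma$): the time average of $\bar v_\mu(t)^2$ over one period recovers the term $\frac{1}{2(1+\mu^2)}$, to which the noise contributes $\frac{\sigma^2}{2}$ additively.

```python
# Sketch: check that the period-average of v_bar_mu(t)^2 equals 1/(2(1+mu^2)).
import numpy as np

mu, sigma = 2.0, 0.3
t = np.linspace(0.0, 2 * np.pi / mu, 200_000, endpoint=False)  # one period
v_bar = (np.sin(mu * t) - mu * np.cos(mu * t)) / (1 + mu**2)
avg = np.mean(v_bar**2)
print(avg, 1 / (2 * (1 + mu**2)))                    # ~0.1 vs 0.1
print("equilibrium of averaged eq.:", sigma**2 / 2 + avg)
```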

2.4 Truncation and asymptotic well-posedness

In some cases, Assumptions 2.1-2.2 may not be satisfied on the entire phase space R p × R q , but only on a subset. Such situations will appear in Section 3 when considering learning models. We introduce here a more refined set of assumptions ensuring that Theorem 2.2 still applies.

Let us start with an example, namely the following bi-dimensional system with white noise input:

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon}\big({-l\, v^\epsilon} + w^\epsilon v^\epsilon\big)\, dt + \frac{\sigma}{\sqrt{\epsilon}}\, dB(t), \\ dw^\epsilon = \big({-\kappa\, w^\epsilon} + (v^\epsilon)^2\big)\, dt, \end{cases} \tag{11}$$

with $\epsilon > 0$, $\sigma > 0$, $l > 0$, $\kappa > 0$.

For the fast drift $(w - l)v$ to be non-explosive, it is necessary to have $w < l - \alpha$ with $\alpha > 0$ for all time. The concern about this system comes from the fact that the slow variable $w$ may reach $l$ due to the fluctuations captured in the term $v^2$, for instance if $\kappa$ is not large enough. Such a system may have exponentially growing trajectories. However, we claim that for small enough $\epsilon$, $w^\epsilon$ will remain close to its averaged limit $w$ for a very long time, and if this limit remains below $l - \alpha$, then $w^\epsilon$ can be considered as well-posed in the asymptotic limit $\epsilon \to 0$. To make this argument more rigorous, we suggest the following definition.

Definition 2.3 A stochastic differential equation with a given initial condition is asymptotically well posed in probability if, for the given initial condition,

  1. a unique solution exists until a random time $\tau^\epsilon$;

  2. for all $T > 0$,

    $$\lim_{\epsilon\to 0} \mathbb{P}[\tau^\epsilon \ge T] = 1.$$

We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy conclusions of Theorem 2.2.

Let us introduce the following set of additional assumptions.

Assumption 2.3 Moment conditions:

  (i) There exists $p > 2$ such that, for any $T > 0$,

    $$\sup_\epsilon \mathbb{E}\Big[\sup_{0\le t\le T} \|v_t^\epsilon\|^p + \|w_t^\epsilon\|^p\Big] < \infty.$$

  (ii) For any $T > 0$ and any bounded subset $K$ of $\mathbb{R}^q$,

    $$\sup_{\epsilon_1 > 0,\, \epsilon_2 > 0,\, w\in K} \mathbb{E}\Big[\sup_{0\le t\le T} \|G(v_t^\epsilon, w)\|^2\Big] < \infty.$$

Remark 2.2 This last set of assumptions will be satisfied in all the applications of Section 3, since we consider linear models with additive noise for the equation of $v$, implying that this variable is Gaussian, and the function $G$ only involves quadratic moments of $v$; therefore, the moment conditions (i) and (ii) are satisfied without any difficulty. Moreover, if one considers non-linear models for the variable $v$, then the Gaussian property may be lost; however, adding a sigmoidal non-linearity has, in general, the effect of bounding the dynamics, thus making these moment assumptions reasonable to check in most models of interest.

Property 2.3 If there exists a subset $E$ of $\mathbb{R}^q$ such that

  1. the functions $F$, $G$, $\Sigma$ satisfy Assumptions 2.1-2.3 restricted to $\mathbb{R}^p \times E$;

  2. $E$ is invariant under the flow of $\bar G_\mu$, as defined in (7);

then for any initial condition $w_0 \in E$, system (4) is asymptotically well posed in probability and $w^\epsilon$ satisfies the conclusion of Theorem 2.2.

Proof See Appendix A.2. □

Here, we show that this applies to system (11). First, with $E_\alpha = \{w \in \mathbb{R} : w < l - \alpha\}$ for some $\alpha \in\, ]0, l[$, it is possible to show that Assumptions 2.1-2.2 are satisfied on $\mathbb{R} \times E_\alpha$. Then, as a special case of (10), we obtain the following averaged system:

$$\frac{dw}{dt} = -\kappa w + \frac{\sigma^2}{2(l - w)} =: \bar G(w).$$

It remains to check that the solution of this system satisfies

$$\exists \alpha > 0 \text{ such that } w(0) < l - \alpha \ \Rightarrow\ \forall t > 0,\ w(t) < l - \alpha,$$

that is, the subset $E_\alpha$ is invariant under the flow of $\bar G$.

This property is satisfied as soon as

$$\eta := \frac{2\sigma^2}{\kappa l^2} < 1.$$

Indeed, one can show that $\bar G(w) = 0$ admits two solutions if and only if $\eta < 1$,

$$w_\pm = \frac{l}{2}\big(1 \pm \sqrt{1 - \eta}\big) \in (0, l),$$

and that $w_-$ is stable whereas $w_+$ is unstable. Thus, if $w(0) < l - \alpha$ with $\alpha = l - w_+ > 0$, then $w(t) < l - \alpha$ for all $t > 0$. In fact, the invariance property holds for all $\alpha \in\, ]l - w_+,\, l - w_-[$.
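The following sketch (assumed parameter values) computes $\eta$ and the two equilibria $w_\pm$ of the averaged system, the quantities that determine the invariant set.

```python
# Sketch: equilibria of G_bar(w) = -kappa*w + sigma^2 / (2(l - w)).
import numpy as np

kappa, l, sigma = 1.0, 1.0, 0.5
eta = 2 * sigma**2 / (kappa * l**2)
assert eta < 1, "no equilibrium below l: the truncation argument fails"
w_minus = l / 2 * (1 - np.sqrt(1 - eta))  # stable equilibrium
w_plus = l / 2 * (1 + np.sqrt(1 - eta))   # unstable equilibrium
print(f"eta = {eta:.3f}, w- = {w_minus:.3f}, w+ = {w_plus:.3f}")
```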

3 Averaging learning neural networks

In this section, we apply the temporal averaging methods derived in Section 2 to models of unsupervised learning neural networks. First, we design a generic learning model and show that one can formally define an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to get explicit results for simple systems where the fast activity dynamics is linear. In the three last subsections, we carry out the analysis for three examples of increasing complexity.

In the following, we always consider that the initial connectivity is 0. This choice is arbitrary but without consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3).

3.1 A generic learning neural network

We now introduce a large class of stochastic neuronal networks with learning models. They are defined as coupled systems describing the simultaneous evolution of the activity of $n \in \mathbb{N}$ neurons and of the connectivity between them. We define $v \in \mathbb{R}^n$, the activity field of the network, and $W \in \mathbb{R}^{n\times n}$, the connectivity matrix.

Each neuron variable v i is assumed to follow the SDE

$$dv_i = \big(f_i(v_i) + u_i\big)\, dt + \Sigma\, dB_i(t),$$

where the function $f_i$ characterizes the intrinsic non-linear dynamical behavior of neuron $i$ and $u_i$ is the input received by neuron $i$. The stochastic term $\Sigma\, dB_i(t)$ is added to account for internal sources of noise. In terms of notations, $(B(t))_{t\ge 0}$ is a standard $n$-dimensional Brownian motion, $\Sigma$ is an $n\times n$ matrix, possibly a function of $v$ or other variables, and $\Sigma\, dB_i(t)$ denotes the $i$-th component of the vector $\Sigma\, dB(t)$.

The input $u_i$ to neuron $i$ has two main components: the external input $u_i^{\mathrm{ext}}$ and the input coming from the other neurons in the network, $u_i^{\mathrm{syn}}$. The latter is a priori a complex combination of post-synaptic potentials coming from many other neurons. The coefficient $W_{ij}$ of the connectivity matrix accounts for the strength of the synapse $j \to i$. Note that neurons can be connected to themselves, i.e., $W_{ii}$ is not necessarily null. Thus, we can write

$$u_i^{\mathrm{syn}} := S\Big(\sum_{j=1}^{n} W_{ij}\, H(v_i, v_j)\Big),$$

where $S: \mathbb{R} \to \mathbb{R}$ and where $H$ is a function taking the history of $v_i$ and $v_j$ and returning a real number for each time $t$ (to take convolutions into account). In practical cases, $S$ is often taken to be a sigmoidal function. We abusively redefine $S$ and $H$ as vector-valued operators corresponding to the element-wise application of their real counterparts. We also define the function $F: \mathbb{R}^n \to \mathbb{R}^n$ such that $F(v)_i = f_i(v_i)$. Together with a slow generic learning rule, this leads to defining a stochastic learning model as the following system of SDEs.

Definition 3.1

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon}\Big[F(v^\epsilon) + S\big(W^\epsilon H(v^\epsilon)\big) + u^{\mathrm{ext}}(t)\Big]\, dt + \frac{1}{\sqrt{\epsilon}}\, \Sigma(v^\epsilon, W^\epsilon)\, dB(t), \\ dW^\epsilon = G(W^\epsilon, v^\epsilon)\, dt. \end{cases}$$

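For concreteness, here is a minimal simulation sketch of this generic model with an Euler-Maruyama scheme; all the functions passed in are user-supplied placeholders, and the memoryless form $H(v)$ is an assumption of the sketch.

```python
# Minimal sketch of Definition 3.1 (Euler-Maruyama); F, S, H, G, Sigma, u_ext
# are placeholders to be supplied by the modeler.
import numpy as np

def simulate(F, S, H, G, Sigma, u_ext, n, eps, T, dt, seed=0):
    rng = np.random.default_rng(seed)
    v, W = np.zeros(n), np.zeros((n, n))   # initial connectivity set to 0
    for k in range(int(T / dt)):
        t = k * dt
        drift = F(v) + S(W @ H(v)) + u_ext(t)
        noise = Sigma(v, W) @ rng.standard_normal(n) * np.sqrt(dt)
        v = v + drift * dt / eps + noise / np.sqrt(eps)   # fast activity
        W = W + G(W, v) * dt                              # slow learning rule
    return v, W
```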
Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a non-autonomous, stochastic, non-linear slow-fast system.

In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied, restricting the space of possible functions $F$, $S$, $H$, $\Sigma$, and $G$. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems with multiple equilibria and suggests that the general theory is more suited to deal with rate-based networks. However, one should keep in mind that these assumptions are only sufficient, and that the double averaging principle may work as well in systems which do not readily satisfy those assumptions.

As we will show in Section 3.3, a particular form of history-dependence can be taken into account, to a certain extent. Indeed, for instance, if the function $H$ is actually a functional of the past trajectory of the variable $v^\epsilon$ which can be expressed as the solution of an additional SDE, then it may be possible to include a certain form of history-dependence. However, purely time-delayed systems do not enter the scope of this theory, although it might be possible to derive an analogous averaging method in this framework.

The noise term can be purely additive or set by a particular function Σ(v,W) as long as it satisfies Assumption 2.2(i), meaning that it must be uniformly non-degenerate.

In the following subsections, we apply the averaging theory to various combinations of neuronal network models, embodied by choices of the functions $F$, $S$, $H$, $\Sigma$, and various learning rules, embodied by a choice of the function $G$. We will also analyze the obtained averaged system, describing the slow dynamics of the connectivity matrix in the limit of perfect time-scale separation, and, in particular, study the convergence of this averaged system to an equilibrium point.

3.2 Symmetric Hebbian learning

One of the simplest, yet non-trivial, stochastic learning models is obtained when considering

  • A linear model for neuronal activity, namely $f_i(v_i) = -l\, v_i$ with $l$ a positive constant.

  • A linear model for synaptic transmission, namely $S(v_i) = v_i$ and $H(v_i, v_j) = v_j$.

  • A constant diffusion matrix $\Sigma$ (additive noise) proportional to the identity, $\Sigma = \sigma\, \mathrm{Id}$ (spatially uncorrelated noise).

  • A Hebbian learning rule with linear decay, namely $G_{ij}(W, v) = -\kappa W_{ij} + v_i v_j$. Actually, it corresponds to the tensor product: $\{v \otimes v\}_{ij} = v_i v_j$.

This model can be written as follows:

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon_1}\Big({-L v^\epsilon} + W^\epsilon v^\epsilon + u\big(\tfrac{t}{\epsilon_2}\big)\Big)\, dt + \frac{\sigma}{\sqrt{\epsilon_1}}\, dB(t), \\ \frac{dW^\epsilon}{dt} = G(v^\epsilon, W^\epsilon) = -\kappa W^\epsilon + v^\epsilon \otimes v^\epsilon, \end{cases} \tag{12}$$

where neurons are assumed to have the same decay constant, $L = l\, \mathrm{Id}$; $u$ is a periodic continuous input (it replaces $u^{\mathrm{ext}}$ of the previous section); $\sigma, \epsilon_1, \epsilon_2, \kappa \in \mathbb{R}_+$ with $\epsilon_1, \epsilon_2 \ll 1$; and $B(t)$ is an $n$-dimensional Brownian motion.

The first question that arises is about the well-posedness of the system: What is the definition interval of the solutions of system (12)? Do they explode in finite time? At first sight, it seems there may be a runaway of the solution if the largest real part among the eigenvalues of W grows bigger than l. In fact, it turns out this scenario can be avoided if the following assumption linking the parameters of the system is satisfied.

Assumption 3.1 There exists $p \in\, ]0, 1[$ such that

$$\frac{\sigma^2\, l}{2\, p\, (1-p)} + \frac{u_m^2}{p\, (1-p)^2} < \kappa\, l^3,$$

where $u_m = \sup_{t\in\mathbb{R}_+} \|u(t)\|_2$.

It corresponds to making sure that the external (i.e., $u_m$) and internal (i.e., $\sigma$) excitations are not too large compared to the decay mechanisms (represented by $\kappa$ and $l$). Note that if $p \in\, ]0,1[$, $u_m$, and $\sigma$ are fixed, it is sufficient to increase $\kappa$ or $l$ for this assumption to be satisfied.

Under this assumption, the space

$$E_p = \big\{W \in \mathbb{R}^{n\times n} : W \text{ is symmetric},\ W \ge 0 \text{ and } W < pL\big\}$$

is invariant under the flow of the averaged system $\bar G$, where $W \ge 0$ means that $W$ is positive semi-definite and $W < pL$ means that $pL - W$ is positive definite. Therefore, the averaged system is defined and bounded on $\mathbb{R}_+$. The slow-fast system being asymptotically close to the averaged system, it is therefore asymptotically well-defined in probability. This is summarized in the following theorem.

Theorem 3.1 If Assumption 3.1 is verified for some $p \in\, ]0,1[$, then system (12) is asymptotically well posed in probability, and the connectivity matrix $W^\epsilon$, the solution of system (12), converges to $W$ in the sense that for all $\delta, T > 0$,

$$\lim_{|\epsilon|\to 0}^{\mu} \mathbb{P}\Big[\sup_{t\in[0,T]} |W_t^\epsilon - W_t|^2 > \delta\Big] = 0,$$

where $W$ is the deterministic solution of

$$\frac{dW_{ij}}{dt} = \bar G(W)_{ij} = \underbrace{-\kappa W_{ij}}_{\text{decay}} + \underbrace{\frac{\mu}{\tau}\int_0^{\tau/\mu} \bar v_i(s)\, \bar v_j(s)\, ds}_{\text{correlation}} + \underbrace{\frac{\sigma^2}{2}\big[(L - W)^{-1}\big]_{ij}}_{\text{noise}}, \tag{13}$$

where $\bar v(t)$ is the $\frac{\tau}{\mu}$-periodic attractor of $\frac{d\bar v}{dt} = (W - L)\bar v + u(\mu t)$, with $W \in \mathbb{R}^{n\times n}$ held fixed.

Proof See Theorem B.1 in Appendix B.2. □

In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following.

3.2.1 Noise term

As seen in Section 2, in the linear case the noise term $Q$ is the unique solution of the Lyapunov equation (9) with $A = L - W$ and $\Sigma = \sigma\, \mathrm{Id}$. Because the noise is spatially uncorrelated and identical for each neuron, and also because the connectivity is symmetric, observe that $Q = \frac{\sigma^2}{2}(L - W)^{-1}$ is the unique solution of the system.

In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4.
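A two-neuron sanity check of this closed form (assumed parameter values):

```python
# Sketch: verify that Q = sigma^2/2 * (L - W)^{-1} solves the Lyapunov
# equation (L-W) Q + Q (L-W)^T = sigma^2 Id for a symmetric W with W < l Id.
import numpy as np

l, sigma = 12.0, 0.05
W = np.array([[0.5, 0.2],
              [0.2, 0.3]])                  # symmetric, norm well below l
A = l * np.eye(2) - W
Q = sigma**2 / 2 * np.linalg.inv(A)
print(np.allclose(A @ Q + Q @ A.T, sigma**2 * np.eye(2)))  # True
```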

3.2.2 Correlation term

This term corresponds to the auto-correlation of neuronal activity. It is only implicitly defined; thus, this section is devoted to finding an explicit form depending only on the parameters l, μ, τ, the connectivity W, and the inputs u. Actually, one can perform an expansion of this term with respect to a small parameter corresponding to a weakly connected expansion. Most terms vanish if the connectivity W is small compared to the strength of the intrinsic decaying dynamics of neurons l.

The auto-correlation term of a $\frac{\tau}{\mu}$-periodic function can be rewritten as

$$\{\bar v \cdot \bar v^\top\}_{ij} = \int_0^{\tau/\mu} \bar v_i(s)\, \bar v_j(s)\, ds.$$

With this notation, it is simple to think of $\bar v$ as a 'semi-continuous matrix' in $\mathbb{R}^{n\times[0,\tau/\mu[}$. Hence, the operator '$\cdot$' can be thought of as a matrix multiplication. Similarly, the transpose operator turns a matrix $\bar v \in \mathbb{R}^{n\times[0,\tau/\mu[}$ into a matrix $\bar v^\top \in \mathbb{R}^{[0,\tau/\mu[\times n}$. See Appendix B.1 for details about the notations.

It is common knowledge, see [17] for instance, that this term gathers information about the correlation of the inputs. Indeed, if we assume that the input is sufficiently slow, then $\bar v$ has enough time to equilibrate, so that at first order $\bar v(t) \simeq (L - W)^{-1} u(t)$ for all $t \in [0, +\infty[$. This leads to $\bar v \cdot \bar v^\top \simeq (L - W)^{-1}\, u \cdot u^\top\, (L - W)^{-1}$. In the weakly connected regime, one can assume that $W \ll L$, leading to $\bar v \cdot \bar v^\top \simeq \frac{1}{l^2}\, u \cdot u^\top$, which is the auto-correlation of the inputs.

Actually, without the assumption of a slow input, lagged correlations of the input appear in the averaged system. Before giving the expression of these temporal correlations, we need to introduce some notations. First, define the convolution filter $g_{l/\mu}: t \mapsto \frac{l}{\mu}\, e^{-\frac{l}{\mu}t}\, H(t)$, where $H$ is the Heaviside function. This family of functions is displayed for different values of $\frac{l}{\mu}$ in Figure 4(a). Note that $g_{l/\mu} \to \delta_0$ when $\frac{l}{\mu} \to +\infty$, where $\delta_0$ is the Dirac distribution centered at the origin. In this asymptotic regime, the convolution filter and its iterates $g_{l/\mu} * g_{l/\mu} * \cdots$ are equal to the identity.

We also define the filtered correlations of the inputs $C^{k,q} \in \mathbb{R}^{n\times n}$ by

$$C^{k,q} \stackrel{\mathrm{def}}{=} \frac{1}{u_m^2\, \tau}\, \big(u * g_{l/\mu}^{(k+1)}\big)\cdot\big(u * g_{l/\mu}^{(q+1)}\big)^\top,$$

where $g_{l/\mu}^{(k+1)} = g_{l/\mu} * \cdots * g_{l/\mu}$ is the $(k+1)$-fold convolution of $g_{l/\mu}$ with itself and $u_m = \sup_{t\in\mathbb{R}_+}\|u(t)\|_2$. This is the correlation matrix of the inputs filtered by two different functions. It is easy to show that this is similar to computing the cross-correlation of the inputs with the inputs filtered by another function,

$$C^{k,q} = \frac{1}{u_m^2\, \tau}\Big(u * \big(g_{l/\mu}^{(k+1)} * g_{l/\mu}^{(q+1)\top}\big)\Big)\cdot u^\top = \frac{1}{u_m^2\, \tau}\, u \cdot \Big(u * \big(g_{l/\mu}^{(q+1)} * g_{l/\mu}^{(k+1)\top}\big)\Big)^\top, \tag{14}$$

which motivates the definition of the $(k,q)$-temporal profile $g_{l/\mu}^{(k+1)} * g_{l/\mu}^{(q+1)\top}$, where $(g_{l/\mu}^\top)^{(k)}(t) = (g_{l/\mu}^{(k)})^\top(t) = g_{l/\mu}^{(k)}(-t)$. This notation is deliberately similar to that of the transpose operator we use in the proofs. These functions are shown in Figure 1. We have not found a way to make them explicit; therefore, the following remarks are simply based on numerical illustrations. When $k = q$, the temporal profiles are centered. The larger the difference $k - q$, the larger the center of the bell. The larger the sum $k + q$, the larger the standard deviation. This motivates the idea that $C^{k,q}$ can be thought of as the $(k-q)$-lagged correlation of the inputs. One can also say that $C^{10,10}$ is more blurred than $C^{0,0}$, in the sense that the inputs are temporally integrated over a 'wider' window in the first case.

Fig. 1 The $(k,q)$-temporal profiles with $\frac{l}{\mu} = 1$, i.e., the functions $g_1^{(k+1)} * g_1^{(q+1)\top}$ for $q = 0$ and $k$ ranging from 0 to 6. For $k = q = 0$, the temporal profile is even, and this also occurs to be true for any $k = q$. When $k > q$, the function reaches its maximum at strictly positive values that grow with the difference $k - q$. Besides, the temporal profiles are flattened when $k + q$ increases.

Observe that $g_{l/\mu}^{(k+1)}(t) = \frac{l^{k+1}}{\mu^{k+1}\, k!}\, t^k\, e^{-\frac{l}{\mu}t}\, H(t)$. Therefore, $\|g_{l/\mu}^{(k+1)}\|_1 = \frac{\Gamma(k+1)}{k!} = 1$. Thanks to Young's inequality for convolutions, which says that $\|u * g_{l/\mu}^{(k)}\|_2 \le \|u\|_2\, \|g_{l/\mu}^{(k)}\|_1$, it can be proved that $\|C^{k,q}\|_2 \le 1$.
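Numerically, the $C^{k,q}$ are convenient to build in the Fourier domain, since convolving with $g_{l/\mu}^{(k+1)}$ multiplies each Fourier coefficient of the $\tau$-periodic input by $\big(\frac{a}{a + i\omega}\big)^{k+1}$ with $a = l/\mu$. The sketch below uses an assumed sinusoidal input and illustrative parameter values.

```python
# Sketch: filtered input correlations C^{k,q} for a tau-periodic input,
# computed via FFT; l, mu, tau and the input u are illustrative assumptions.
import numpy as np

l, mu, tau, n, N = 12.0, 1.0, 2 * np.pi, 3, 4096
t = np.arange(N) * tau / N
u = np.stack([np.sin(2 * np.pi * t / tau + 2 * np.pi * i / n) for i in range(n)])
u_m = np.sqrt((u**2).sum(axis=0)).max()          # sup_t ||u(t)||_2
omega = 2 * np.pi * np.fft.fftfreq(N, d=tau / N)
U = np.fft.fft(u, axis=1)

def C(k, q, a=l / mu):
    uk = np.fft.ifft(U * (a / (a + 1j * omega))**(k + 1), axis=1).real
    uq = np.fft.ifft(U * (a / (a + 1j * omega))**(q + 1), axis=1).real
    return (uk @ uq.T) * (tau / N) / (u_m**2 * tau)  # integral over one period

print(np.round(C(0, 0), 3))
print(np.linalg.norm(C(3, 1), 2) <= 1)               # Young's inequality bound
```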

We intend to express the correlation term as an infinite convergent sum involving these filtered correlations. In this perspective, we use a result proved in [25] to write the solution of a general class of non-autonomous linear systems (e.g., $\frac{d\bar v}{dt} = (W - L)\bar v + u(t)$) as an infinite sum, in the case $\mu = 1$.

Lemma 3.2 If $\bar v$ is the solution, with zero initial condition, of $\frac{d\bar v}{dt} = (W - L)\bar v + u(t)$, then it can be written as the sum below, which converges if $W$ is in $E_p$ for some $p \in\, ]0,1[$:

$$\bar v = \sum_{k=0}^{+\infty} \frac{W^k}{l^{k+1}}\, u * g_l^{(k+1)},$$

where $g_l: t \mapsto l\, e^{-lt}\, H(t)$.

Proof See Lemma B.2 in Appendix B.2. □

This is a decomposition of the solution of a linear differential system on the basis of operators where the spatial and temporal parts are decoupled. This important step in a detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now set up to introduce the explicit expansion of the correlation we are using in what follows. Indeed, we use the previous result to rewrite the correlation term as follows.

Property 3.3 The correlation term can be written as

$$\frac{\mu}{\tau}\, \bar v \cdot \bar v^\top = \frac{u_m^2}{l^2}\, \sum_{k,q=0}^{+\infty} \frac{W^k}{l^k}\, C^{k,q}\, \frac{W^q}{l^q}.$$

Proof See Theorem B.3 in Appendix B.2. □

This infinite sum of convolved filters is reminiscent of a property of Hawkes processes that have a linear input-output gain [26].

The speed of the inputs, characterized by $\mu$, only appears in the temporal profiles $g_{l/\mu}^{(k)} * g_{l/\mu}^{(q)\top}$. In particular, if the inputs are much slower than the neuronal activity time-scale, i.e., $\mu = 0$, then $g_{+\infty} = \delta_0$ and $u * g_{+\infty} = u$. Therefore, $C^{k,q} = C^{0,0}$ and the sums in the formula of Property 3.3 are separable, leading to $\bar v \cdot \bar v^\top = (L - W)^{-1}\, u \cdot u^\top\, (L - W)^{-1}$, which corresponds to the heuristic result previously explained.

Therefore, the averaged equation can be explicitly rewritten as

$$\frac{dW}{dt} = \bar G(W) = -\kappa W + \frac{u_m^2}{l^2}\, \sum_{k,q=0}^{+\infty} \frac{W^k}{l^k}\, C^{k,q}\, \frac{W^q}{l^q} + \frac{\sigma^2}{2}(L - W)^{-1}. \tag{15}$$

In Figure 2, we illustrate this result by comparing, for different $\epsilon = \epsilon_1 = \epsilon_2$ (i.e., we choose $\mu = 1$ in this example), the stochastic system and its averaged version. The above decomposition has been used as the basis for the numerical computation of trajectories of the averaged system.
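A minimal sketch of such a computation, truncating the double sum at an assumed order $K$ and taking the matrices $C^{k,q}$ from a precomputed source (e.g., the hypothetical helper `C(k, q)` above):

```python
# Sketch: truncated evaluation of the averaged vector field of equation (15).
import numpy as np

def G_bar(W, C, kappa, l, sigma, u_m, K=6):
    """Right-hand side of (15) with the double sum truncated at order K.

    C is any callable such that C(k, q) returns the n x n matrix C^{k,q}."""
    n = W.shape[0]
    P = [np.linalg.matrix_power(W / l, k) for k in range(K)]
    corr = sum(P[k] @ C(k, q) @ P[q].T for k in range(K) for q in range(K))
    noise = sigma**2 / 2 * np.linalg.inv(l * np.eye(n) - W)
    return -kappa * W + (u_m**2 / l**2) * corr + noise
```

Since $\|W\|/l < p < 1$ on $E_p$ and $\|C^{k,q}\|_2 \le 1$, the truncation error decays geometrically with $K$.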

Fig. 2 Panels (a) and (b) represent the evolution of the connectivity for the original stochastic system (12), superimposed with averaged system (13), for two different values of $\epsilon$: respectively $\epsilon = 0.01$ and $\epsilon = 0.001$, where we have chosen $\epsilon = \epsilon_1 = \epsilon_2$. Each color corresponds to the weight of an edge in a network made of $n = 3$ neurons. As expected, the smaller $\epsilon$, the better the approximation. This can be seen in panel (c), where the precision is plotted on the y-axis against $\epsilon$ on the x-axis. The parameters used here are $l = 12$, $\mu = 1$, $\kappa = 100$, $\sigma = 0.05$. The inputs have a random (but frozen) spatial structure and evolve according to a sinusoidal function.

3.2.3 Global stability of the equilibrium point

Now that we have found an explicit formulation for the averaged system, it is natural to study its dynamics. Actually, we prove in the following that if the connectivity $W$ is kept smaller than $\frac{l}{3}$, i.e., Assumption 3.1 is verified with $p \le \frac{1}{3}$, then the dynamics is trivial: the system converges to a single equilibrium point. Indeed, under the previous assumption, the system can be written as $\bar G(W) = -\kappa W + F(W)$, where $F$ is a contraction operator on $E_{1/3}$. Therefore, one can prove the uniqueness of the fixed point with the Banach fixed-point argument and exhibit an energy function for the system.

Theorem 3.4 If Assumption 3.1 is verified for $p \le \frac{1}{3}$, then there is a unique equilibrium point in the invariant subset $E_p$, which is globally asymptotically stable.

Proof See Theorem B.4 in Appendix B.2. □

The fact that the equilibrium point is unique means that the 'knowledge' of the network about its environment (corresponding, by hypothesis, to the connectivity) is eventually unique. For a given input and any initial condition, the network can only converge to the same 'knowledge' or 'understanding' of this input.

3.2.4 Explicit expansion of the equilibrium point

When the network is weakly connected, the high-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion.

In fact, it is not natural to speak about a weakly connected learning network, since the connectivity is a variable. However, we are able to identify a weak connectivity index which controls the strength of the connectivity. We say the connectivity is weak when it is negligible compared to the intrinsic leak term, i.e., when $\frac{\|W\|}{l}$ is small. We show in the Appendix that this weak connectivity index depends only on the parameters of the network and can be written as

$$\tilde p = \frac{u_m^2}{\kappa l^3} + \frac{\sigma^2}{2\kappa l^2}.$$

In the asymptotic regime $\tilde p \to 0$, we have $\frac{\|W\|}{\tilde p\, l} = O(1)$. This index is the 'small' parameter needed to perform the expansion. We also define $\lambda = \frac{\sigma^2 l}{2 u_m^2}$, which carries information about the way $\tilde p$ converges to zero. In fact, it is the ratio of the two terms of $\tilde p$.

With these, we can prove that the equilibrium connectivity $W$ has the following asymptotic expansion in $\tilde p$.

Theorem 3.5

$$W = \frac{\tilde p\, l}{1+\lambda}\,\big(\lambda\, \mathrm{Id} + C^{0,0}\big) + \frac{\tilde p^2\, l}{(1+\lambda)^2}\,\Big(\lambda^2\, \mathrm{Id} + \lambda\big(C^{0,0} + C^{1,0} + C^{0,1}\big) + C^{0,0} C^{1,0} + C^{0,1} C^{0,0}\Big) + O(\tilde p^3).$$

Proof See Theorem B.5 in Appendix B.2. □

At first order, the final connectivity is proportional to $C^{0,0}$, the filtered correlation of the inputs convolved with a bell-shaped centered temporal profile. In the case of Figure 3, this is quite a good approximation of the final connectivity.
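In code, the first-order term is immediate once $C^{0,0}$ is available (e.g., from the hypothetical `C(k, q)` helper above; parameter values are assumptions):

```python
# Sketch: first-order approximation of the equilibrium connectivity
# (Theorem 3.5), given the filtered input correlation matrix C00.
import numpy as np

def W_first_order(C00, l, kappa, sigma, u_m):
    n = C00.shape[0]
    p_tilde = u_m**2 / (kappa * l**3) + sigma**2 / (2 * kappa * l**2)
    lam = sigma**2 * l / (2 * u_m**2)
    return p_tilde * l / (1 + lam) * (lam * np.eye(n) + C00)
```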

Fig. 3 (a) shows the temporal evolution of the input to a network of $n = 8$ neurons. It is made of two spatially random patterns that are shown alternately. (b) shows the correlation matrix of the inputs. The off-diagonal terms are null because the two patterns are spatially orthogonal. (c), (d), and (e) represent the first order of the expansion in Theorem 3.5 for different $\mu$. Actually, this approximation is quite good, since the percentage of error between the averaged system and the first order, computed as $\mathrm{error} = \frac{\|W^{\text{order 1}} - W\|_1}{\|W\|_1}$, has an order of magnitude of $10^{-4}\%$ for the three figures. These figures make it possible to observe the role of $\mu$. If $\mu$ is small, i.e., the inputs are slow, then the transients can be neglected and the learned connectivity is roughly the correlation of the inputs; see (c). If $\mu$ increases, i.e., the inputs are faster, then the connectivity starts to encode a link between the two patterns that were flashed circularly and elicited responses that did not fade away when the other pattern appeared. The temporal structure of the inputs is also learned when $\mu$ is large. The parameters used in this figure are $\epsilon = 0.001$, $l = 12$, $\kappa = 100$, $\sigma = 0.02$.

Not only is the spatial correlation encoded in the weights; there is also some information about the temporal correlation: two successive but orthogonal events occurring in the inputs will be wired together in the connectivity, although they do not appear in the spatial correlations; see Figure 3 for an example.

3.3 Trace learning: band-pass filter effect

In this section, we study an improvement of the learning model obtained by adding a certain form of history dependence to the system, and explain the way it changes the results of the previous section. Given that Theorem 2.2 only applies to an instantaneous process, we will only be able to treat those history-dependent systems which can be reformulated as instantaneous processes. Actually, this class of systems contains models which are biologically more relevant than the previous model and which exhibit interesting additional functional behaviors. In particular, this covers the following features:

  • Trace learning.

It is likely that a biological learning rule integrates the activity over a short time. As Földiák suggested in [27], it makes sense to consider the learning equation as being

$$\frac{dW^\epsilon}{dt} = -\kappa W^\epsilon + \big(v^\epsilon * g_1\big) \otimes \big(v^\epsilon * g_1\big),$$

where $*$ is the convolution and $g_1: t \in \mathbb{R} \mapsto \beta_1\, e^{-\beta_1 t}\, H(t)$. Rolls and Deco show numerically in [15] that the temporal convolution, leading to spatio-temporal learning, makes it possible to perform invariant object recognition. Besides, trace learning appears to be the symmetric part of the biological STDP rule that we detail in Section 3.4.

  • Damped oscillatory neurons.

Many neurons have an oscillatory behavior. Although we cannot take this into account in a linear model, we can model a neuron by a damped oscillator, which also introduces a new important time-scale into the system. Adding adaptation to the neuronal dynamics is an elementary way to implement this idea. This corresponds to modeling a single neuron without inputs by the equivalent formulations

$$\begin{cases} \frac{dv^\epsilon}{dt} = -l\, z^\epsilon, \\ \frac{dz^\epsilon}{dt} = \beta_2\,(v^\epsilon - z^\epsilon), \end{cases} \qquad\Longleftrightarrow\qquad \frac{dv^\epsilon}{dt} = -l\, v^\epsilon * g_2, \quad\text{where } g_2(t) = \beta_2\, e^{-\beta_2 t}\, H(t).$$
  • Dynamic synapses.

The electro-chemical process of synaptic communication is very complicated and non-linear. Yet, one of the features of synaptic communication we can take into account in a linear model is the shape of the post-synaptic potentials. In this section, we consider that each synapse is a linear filter whose impulse response (i.e., the post-synaptic potential) has the shape $g_3(t) = \beta_3\, e^{-\beta_3 t}\, H(t)$. This is a common assumption which, for instance, is at the basis of traditional rate-based models; see Chapter 11 of [7].

For mathematical tractability, we assume in the following that $\beta = \beta_1 = \beta_2 = \beta_3 \in \mathbb{R}_+$, so that $g_\beta = g_1 = g_2 = g_3$, i.e., the time-scales of the neurons, those of the synapses, and those of the learning windows are the same. Actually, there is a large variety of temporal scales for neurons, synapses, and learning windows, which makes this assumption not absurd. Besides, in many brain areas, examples of these time constants are in the same range (about 10 ms). Yet, investigating the impact of breaking this assumption would be necessary to model biological networks more accurately. This leads to the following system:

$$\begin{cases} dv^\epsilon = \frac{1}{\epsilon_1}\Big(\big(W^\epsilon - L\big)\, v^\epsilon * g_\beta + u\big(\tfrac{t}{\epsilon_2}\big)\Big)\, dt + \frac{\sigma}{\sqrt{\epsilon_1}}\, dB(t), \\ \frac{dW^\epsilon}{dt} = -\kappa W^\epsilon + \big(v^\epsilon * g_\beta\big) \otimes \big(v^\epsilon * g_\beta\big), \end{cases} \tag{16}$$

where the notations are the same as in Section 3.2. The behavior of a single neuron is oscillatory damped if $\Delta = \sqrt{1 - \frac{4l}{\beta}}$ is a pure imaginary number, i.e., $4l > \beta$. This is the regime on which we focus. Actually, the Hebbian linear case of Section 3.2 corresponds to $\beta = +\infty$ in this delayed system.

To comply with the hypotheses of Theorem 2.2 (i.e., no dependence on the history of the process), we can add a variable $z$ to the system which takes care of integrating the variable $v$ over an exponential window. It leads to the equivalent system (in the limit $\sigma_z \to 0$)

$$\begin{cases} d\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} = \frac{1}{\epsilon_1}\left[\begin{pmatrix} 0 & W^\epsilon - L \\ \beta & -\beta \end{pmatrix}\begin{pmatrix} v^\epsilon \\ z^\epsilon \end{pmatrix} + \begin{pmatrix} u(\tfrac{t}{\epsilon_2}) \\ 0 \end{pmatrix}\right] dt + \begin{pmatrix} \frac{\sigma}{\sqrt{\epsilon_1}}\, dB(t) \\ \frac{\sigma_z}{\sqrt{\epsilon_1}}\, dB(t) \end{pmatrix}, \\ \frac{dW^\epsilon}{dt} = -\kappa W^\epsilon + z^\epsilon \otimes z^\epsilon. \end{cases}$$

This trick makes it possible to deal with some history-based processes where the dependence on the past is exponential.

It turns out that most of the results of Section 3.2 remain true for system (16), as detailed in the following. The existence of the solution on $\mathbb{R}_+$ is proved in Theorem B.6. The computations show that in the averaged system, the noise term remains identical, whereas the correlation term is to be replaced by $\frac{\mu}{\tau}(\bar v * g_\beta)\cdot(\bar v * g_\beta)^\top$. Besides, Lemma 3.2 can be extended to our delayed system by changing only the temporal filters; see Lemma B.7. Together with Lemma C.3, this proves the result of Theorem B.8:

$$\frac{\mu}{\tau}\,(\bar v * g_\beta)\cdot(\bar v * g_\beta)^\top = \frac{u_m^2\, \|v\|_1^2}{l^2}\, \sum_{k,q=0}^{+\infty} \frac{W^k}{(l/\|v\|_1)^k}\, \tilde C^{k,q}\, \frac{W^q}{(l/\|v\|_1)^q},$$

where

$$\tilde C^{k,q} = \frac{1}{u_m^2\, \tau\, \|v\|_1^{k+q+2}}\, \big(u * v^{(k+1)}\big)\cdot\big(u * v^{(q+1)}\big)^\top,$$

and where $v: t \mapsto \frac{l}{\mu\Delta}\big(e^{-\frac{\beta}{2\mu}(1-\Delta)t} - e^{-\frac{\beta}{2\mu}(1+\Delta)t}\big)\, H(t)$. Observe that applying Young's inequality for convolutions leads to $\|\tilde C^{k,q}\|_2 \le 1$. Actually, Lemma C.3 shows that $v^{(k)} = v_k: t \mapsto \sqrt{\pi}\,\frac{\beta}{k!}\, e^{-\frac{\beta}{2}t}\, \Big(\frac{t}{|\Delta|}\Big)^{k+\frac{1}{2}}\, J_{k+\frac{1}{2}}\big(\tfrac{\beta|\Delta|}{2}\, t\big)\, H(t)$, where $J_n(z)$ is the Bessel function of the first kind. The value of the $L^1$ norm of $v$ is computed in Appendix C.3: it leads to $\|v\|_1 = \coth\big(\frac{\pi}{2|\Delta|}\big)$ if $\Delta$ is a pure imaginary number, and $\|v\|_1 = 1$ otherwise.

Therefore, the averaged system can be rewritten

d W d t = G ¯ (W)=κW+ u m 2 v 1 2 l 2 k , q = 0 + W k ( l / v 1 ) k C ˜ k , q W q ( l / v 1 ) q + σ 2 2 ( L W ) 1 .

As before, the existence and uniqueness of a globally attractive equilibrium point is guaranteed if Assumption 3.1 is verified for p 1 2 v 1 3 + 1 ; see Theorem B.9.

Besides, the weakly connected expansion of the equilibrium point we did in Section 3.2.4 can be derived in this case (see Theorem B.10). At the first order, this leads to the equilibrium connectivity

W = p ˜ l 1 + λ ( λ + v 1 2 C ˜ 0 , 0 ) +O ( p ˜ 2 v 1 ) .

The second order is given in Theorem B.10. The main difference with the Hebbian linear case is the shape of the temporal filters. Actually, the temporal filters have an oscillatory damped behavior if Δ is purely imaginary. These two cases are compared in Figure 4.

Fig. 4
figure 4

These represent the temporal filter v:tv(t) for different parameters. (a) When β=+, we are in the Hebbian linear case of Appendix B.2. The temporal filters are just decaying exponentials which averaged the signal over a past window. (b) When the dynamics of the neurons and synapse are oscillatory damped, some oscillations appear in the temporal filters. The number of oscillations depends on Δ. If Δ is real, then there are no oscillations as in the previous case. However, when Δ becomes a pure imaginary number, it creates a few oscillations which are even more numerous if |Δ| increases.

These oscillatory damped filters have the effect of amplifying a particular frequency of the input signal. As shown in Figure 5, if Δ is a pure imaginary number, then D 0 , 0 is the cross-correlation of the band-pass filtered inputs with themselves. This band-pass filter effect can also be observed in the higher-order terms of the weakly connected expansion. This suggests that the biophysical oscillatory behavior of neurons and synapses leads to selecting the corresponding frequency of the inputs and performing the same computation as for the Hebbian linear case of the previous section: computing the correlation of the (filtered) inputs.

Fig. 5
figure 5

This is the spectral profile | v v ˆ |(ξ) for β=1 and l]0,2], where v v ˆ denotes the Fourier transform of v v . When 4l<β, the filter reaches its maximum for the null frequency, but if l increases beyond β 4 , the filter becomes a band-pass filter with long tails in  1 ξ 2 .

3.4 Asymmetric ‘STDP’ learning with correlated noise

Here, we extend the results to temporally asymmetric learning rules and spatially correlated noise. We consider a learning rule that is similar to the spike-timing-dependent plasticity (STDP) which is closer to biological experiments than the previous Hebbian rules. It has been observed that the strength of the connection between two neurons depends mainly on the difference between the time of the spikes emitted by each neuron as shown in Figure 6; see [12].

Fig. 6
figure 6

This figure represents the synapse strength modification when the post-synaptic neuron emits a spike. The y-axis corresponds to an additive or multiplicative update of the connectivity. For instance, in [28], this is Δ W i j W i j for the negative part of the curve. However, we assume an additive update in this paper. The x-axis is the time at which a pre-synaptic spike reaches the synapse, relatively to the time of post-synaptic time chosen to be 0.

Assuming that the decay time of the positive and negative parts of Figure 6 are equal, we approximate this function by t a + g γ (t) a g γ (t), where g γ (t)=γ e γ t H(t). Actually, this corresponds to W ϵ ˙ i j =κ W i j ϵ + a + v i ( v j ϵ g γ ) a ( v i ϵ g γ ) v j ϵ . If the neuron has a spiking behavior, then the term a + v i ϵ (t)( v j ϵ g γ )(t) is significant when the post-synaptic neuron i is spiking at time t, and then it counts the number of previous spikes from the pre-synaptic neuron j that might have caused the post-synaptic spike. This calculus is weighted by an exponentially decaying function. This accounts for the left part of Figure 6. The last term a ( v i ϵ g γ ) v j ϵ takes the opposite perspective. It is significant when the pre-synaptic neuron j is spiking and counts the number of previous spikes from the post-synaptic neuron i that are not likely to have been caused by the pre-synaptic neuron. The computation is also weighted by the mirrored function of an exponentially decaying function. This accounts for the right part of Figure 6. This leads to the coupled system

{ d v ϵ = 1 ϵ 1 ( f ( v ϵ ) + W v ϵ + u ( t ϵ 2 ) ) d t + 1 ϵ 1 Σ d B ( t ) , d W ϵ d t = G ( v ϵ , W ϵ ) = κ W ϵ + a + v ϵ ( v ϵ g γ ) a ( v ϵ g γ ) v ϵ ,
(17)

where the non-linear intrinsic dynamics of the neurons is represented by f. Indeed, the term { a + v ϵ ( t ) ( v ϵ g γ ) ( t ) } i j = a + v i ϵ (t) ( v ϵ g γ ) j (t) is negligible when the neuron is quiet and maximal at the top of the spikes emitted by neuron i. Therefore, it records the value of the pre-synaptic membrane potential weighted by the function g γ when the post-synaptic neuron spikes. This accounts for the positive part of Figure 6. Similarly, the negative part corresponds to a ( v ϵ g γ ) v ϵ .

Actually, this formulation is valid for any non-linear activity with correlated noise. However, studying the role of STDP in spiking networks is beyond the scope of this paper since we are only able to have explicit results for models with linear activity. Therefore, we will assume that the activity is linear while keeping the learning rule as it was derived in the spiking case, i.e., we assume f(v)=lv=Lv in the system above.

We also use the trick of adding additional variables to get rid of the history-dependency. This reads

{ d ( v ϵ z ϵ ) = 1 ϵ 1 [ ( W L 0 γ γ ) ( v ϵ z ϵ ) + ( u ( t ϵ 2 ) 0 ) ] d t + ( σ ϵ 1 d B ( t ) σ z ϵ 1 d B ( t ) ) , d W ϵ d t = κ W ϵ + a + v ϵ z ϵ a z ϵ v ϵ .

In this framework, the method exposed in Section 3.2 holds with small changes. First, the well-posedness assumption becomes

Assumption 3.2 There exists p]0,1[ such that

| a + | + | a | p ( 1 p ) ( s 2 γ 2 ( 1 + γ / l p ) + u m 2 ( 1 p ) ) <κ l 3 ,

where s 2 is the maximal eigenvalue of Σ Σ .

Under this assumption, the system is asymptotically well posed in probability (Theorem B.11). And we show the averaged system is

d W d t = G ¯ (W)=κW+ u m 2 ( | a + | + | a | ) l 2 k , q = 0 + W k l k D k , q W q l q +Q,
(18)

where we have used Theorem B.12 to expand the correlation term. The noise term Q is equal to Q 11 ( L + γ W ) 1 , where Q 11 is the unique solution of the Lyapunov equation (WL) Q 11 + Q 11 ( W L)+Σ Σ =0. Lemma D.1 gives a solution for this equation which leads to Q=γ k = 0 + W k Σ Σ ( 2 L W ) ( k + 1 ) ( L + γ W ) 1 . In equation (18), the correlation matrices D k , q are given by

D k , q = 1 u m 2 τ ( | a + | + | a | ) ( u g l / μ ( k + 1 ) ( a + g γ a g γ ) ) ( u g l / μ ( q + 1 ) ) .

According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following.

We perform a weakly connected expansion of the equilibrium connectivity of system (18). As shown in Theorem B.14, the first order of the expansion is

W = p ˜ l 1 + λ ( λ ( α + α ) Σ Σ d + D 0 , 0 ) +O ( p ˜ 2 ) .

Writing D 0 , 0 = 1 u m 2 τ ( | a + | + | a | ) (S+A), where S is symmetric and A is skew-symmetric, leads to

S = a + a 2 u g l / μ ( g γ + g γ ) ( u g l / μ ) , A = a + + a 2 u g l / μ ( g γ g γ ) ( u g l / μ ) .

According to Lemma C.1, the symmetric part is very similar to the trace learning case in Section 3.3. Applying Lemma C.2 leads to

S = ( a + a ) ( u g l / μ g γ ) ( u g l / μ g γ ) , A = a + + a γ ( d u d t g l / μ g γ ) ( u g l / μ g γ ) .
(19)

Therefore, the STDP learning rule simply adds an antisymmetric part to the final connectivity keeping the symmetric part as the Hebbian case. Besides, the antisymmetric part corresponds to computing the cross-correlation of the inputs with its derivative. For high-order terms, this remains true although the temporal profiles are different from the first order. These results are in line with previous works underlying the similarity between STDP learning and differential Hebbian learning, where G(v) v ˙ v; see [29].

Figure 7 shows an example of purely antisymmetric STDP learning, i.e., a + = a . The final connectivity matrix is therefore antisymmetric as shown in Figure 7(b) and the noise has no impact on learning. It shows the network finally approximates the connectivity given in (19).

Fig. 7
figure 7

Antisymmetric STDP learning for a network of n=3 neurons. (a) Temporal evolution of the inputs to the network. The three neurons are successively and periodically excited. The red color corresponds to an excitation of 1 and the blue to no excitation. (b) Equilibrium connectivity. The matrix is antisymmetric and shows that neurons excite one of their neighbors and are inhibited by the other. (c) Temporal evolution of the connectivity strength. The colors correspond to those of (b). The connectivity of system (17) corresponds to the plain thin oscillatory curves. The connectivity of the averaged system (18) (with k,q<4) corresponds to the plain thick lines. Note that each curve corresponds to the superposition of three connections which remain equal through learning. The dashed curves correspond to the antisymmetric part in (19). The parameters chosen for this simulation were l=10, κ=100, γ=3, a + = a =1, τ=3, σ=0.001, μ=1, ϵ=0.001. The system was simulated on the fast time-scale during T=10,000 time steps of size dt=0.01.

4 Discussion

We have applied temporal averaging methods on slow/fast systems modeling the learning mechanisms occurring in linear stochastic neural networks. When we make sure the connectivity remains small, the dynamics of the averaged system appears to be simple: the connectivity always converges to a unique equilibrium point. Then, we performed a weakly connected expansion of this final connectivity whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs.

  • As opposed to the former input/ouput vision of the neurons, we have considered the membrane potential v to be the solution of a dynamical system. The consequence of this modeling choice is that not only the spatial correlations, but also the temporal correlations are learned. Due to the fact we take the transients into account, the activity never converges but it lives between the representation of the inputs. Therefore, it links concepts together.

The parameter μ is the ratio of the time-scales between the inputs and the activity variable. If μ=0, the inputs are infinitely slow and the activity variable has enough time to converge towards its equilibrium point. When μ grows, the dynamics becomes more and more transient, it has no time to converge. Therefore, if the inputs are extremely slow, the network only learns the spatial correlation of the inputs. If the inputs are fast, it also learns the temporal correlations. This is illustrated in Figure 3.

This suggests that learning associations between concepts, for instance, learning words in two different languages, may be optimized by presenting two words to be associated circularly with a certain frequency. Indeed, increasing the frequency (with a fixed duration of exposition to each word) amounts to increasing μ. Therefore, the network learns better the temporal correlations of the inputs and thus strengthens the link between these two concepts.

  • According to the model of resonator neuron [30], Section 3.3 suggests that neurons and synapses with a preferred frequency of oscillation will preferably extract the correlation of the inputs filtered by a band pass filter centered on the intrinsic frequency of the neurons.

Actually, it has been observed that the auditory cortex is tonotopically organized, i.e., the neurons are arranged by frequency [31]. It is traditionally thought that this is achieved thanks to a particular connectivity between the neurons. We exhibit here another mechanism to select this frequency which is solely based on the parameters of the neurons: a network with a lot of different neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like operation: decomposing the signal by frequency.

In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms.

  • In this paper, the original hierarchy of the network has been neglected: the network is made of neurons which receive external inputs. A natural way to include a hierarchical structure (with layers for instance), without changing the setup of the paper, is therefore to remove the external input to some neurons. However, according to Theorem 3.5 (and its extensions Theorems B.10 and B.14), one can see that these neurons will be disconnected from the others at the first order (if the noise is spatially uncorrelated). Linear activities imply that the high level neurons disconnect from others, which is a problem. In fact, one can observe that the second-order term in Theorem 3.5 is not null if the noise matrix Σ is not diagonal. In fact, this is the noise between neurons which will recruit the high level neurons to build connections from and to them.

It is likely that a significant part of noise in the brain is locally induced, e.g., local perturbations due to blood vessels or local chemical signals. In a way, the neurons close to each other share their noise and it seems reasonable to choose the matrix Σ so that it reflects the biological proximity between neurons. In fact, Σ specifies the original structure of the network and makes it possible for close-by neurons to recruit each other.

Another idea to address hierarchy in networks would be to replace the synaptic decay term κW by another homeostatic term [32] which would enforce the emergence of a strong hierarchical structure.

  • It is also interesting to observe that most of the noise contribution to the equilibrium connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely skew-symmetric, i.e., a + = a . In fact, it is only the symmetric part of learning, i.e., the Hebbian mechanism, that writes the noise in the connectivity.

  • We have shown that there is a natural analogous STDP learning for spiking neurons in our case of linear neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The first one is similar to the symmetric Hebbian learning case, emphasizing that the STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives.

This has an interesting signification when looking at the spontaneous activity of the network post-learning. In fact, if we assume that the inputs are generated by an autonomous system d u d t =ζ(u), then according to the bottom equation in formula (19), the spontaneous activity is governed by

dv= ( ζ ( u ) u v l v ) dt+ΣdB(t).

In a way, the noise terms generate random patterns which tend to be forgotten by the network due to the leak term lv. The only drift is due to ζ(u) u v E v , u (ζ(u)) which is the expectation of the vector field defining the dynamics of inputs with a measure being the scalar product between the activity variable and the inputs. In other words, if the activity is close to the inputs at a given time t R + , i.e., v,u( t ) is large, then the activity will evolve in the same direction as what this input would have done. The network has modeled the temporal structure of the inputs. The spontaneous activity predicts and replays the inputs the network has learned.

There are still numerous challenges to carry on in this direction.

First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we study in Section 3 are (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has a linear dynamics, which does not reflect the intrinsic non-linear behavior of the neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes [26] which has useful applications for learning neural networks [19, 20]. This suggests that the dynamics studied in this paper might be quite similar to some non-linear network models. Studying more rigorously the extension of the present theory to non-linear and heterogeneous models is the next step toward a better modeling of biologically plausible neural networks.

Second, we have shown that the equilibrium connectivity was made of a symmetric and antisymmetric term. In terms of statistical analysis of data sets, the symmetric part corresponds to classical correlation matrices. However, the antisymmetric part suggests a way to improve the purely correlation-based approach used in many statistical analyses (e.g., PCA) toward a causality-oriented framework which might be better suited to deal with dynamical data.

Appendix A: Stochastic and periodic averaging

A.1 Long-time behavior of inhomogeneous Markov processes

In order to construct the averaged vector field G ¯ μ (w) in the time-scale matching case (0<μ<), one needs to understand properly the long-time behavior of the rescaled inhomogeneous frozen process

dv=F(v, w 0 ,μt)dt+Σ(v, w 0 )dB(t).
(20)

Under regularity and dissipativity conditions, [5] proves the following general result about the asymptotic behavior of the solution of

d X t = b ( X t , t ) d t + σ ( X t , t ) d B ( t ) , t > s , X s = x ,

where tb(x,t) and tσ(x,t) are τ-periodic.

The first point of the following theorem gives the definition of evolution systems of measures, which generalizes the notion of invariant measures in the case of inhomogeneous Markov processes. The exponential estimate of 2. in the following theorem is a key point to prove the averaging principle of Theorem 2.2.

Theorem A.1 ([5])

  1. 1.

    There exists a unique τ-periodic family of probability measures {μ(s,),sR} such that for all functions ϕ continuous and bounded,

    x R p E [ ϕ ( X t ) ] μ(s,dx)= x R p ϕ(x)μ(t,dx).

Such a family is called evolution systems of measures.

  1. 2.

    Furthermore, under stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any r(1,+), there exist M>0 and ω<0 such that for all ϕ in the space of p-integrable functions with respect to μ(t,), L r ( R p ,μ(t,)),

    x R p E [ ϕ ( X t s , x ) ] x R p ϕ ( x ) μ ( t , d x ) r μ ( s , d x ) M e ω ( t s ) x R p ϕ ( x ) r μ ( t , d x ) .

A.2 Proof of Property 2.3

Property A.2 If there exists a smooth subset of R q such that

  1. 1.

    The functions F, G, Σ satisfy Assumptions  2.1-2.3 restricted on R p ×E.

  2. 2.

    is invariant under the flow of G ¯ μ , as defined in (7).

Then for any initial condition w 0 E, system (4) is asymptotically well posed in probability and w ϵ satisfies the conclusion of Theorem  2.2.

Proof The idea of the proof is to truncate the original system, replacing G by a smooth truncation which coincides with G on and which is close to 0 outside . More precisely, for β>0, we introduce ψ β : R q R q a regular function (locally Lipschitz) such that ψ β (w)=0 if wE or wE and lim β 0 ψ β (w)=1 if wEE. We define

G ˜ β (v,w)=G(v,w) ψ β (w).

Then, we introduce ( v ˜ ϵ , β , w ˜ ϵ , β ) the solution of the auxiliary system

d v ˜ ϵ , β = 1 ϵ 1 [ F ( v ˜ ϵ , β , w ˜ ϵ , β , t ϵ 2 ) ] d t + 1 ϵ 1 Σ ( v ˜ ϵ , β , w ˜ ϵ , β ) d B ( t ) , d w ˜ ϵ , β = G ˜ β ( v ˜ ϵ , β , w ˜ ϵ , β ) d t

with the same initial condition as ( v ϵ , w ϵ ).

Let T,δ,η>0 be three positive reals. Let us introduce a few more notations. We will need to consider a subset of defined by

E β := { w E ; ψ β ( w ) 1 ( η ) 1 / 2 δ } .

We also introduce the following stopping times:

τ ϵ : = inf { t 0 ; w t ϵ E } , τ ϵ β : = inf { t 0 ; w t ϵ E β } , τ ˜ ϵ : = inf { t 0 ; w ˜ t ϵ , β E } , τ ˜ ϵ β : = inf { t 0 ; w ˜ t ϵ , β E β } .

Finally, we define T ϵ :=min(T, τ ϵ , τ ˜ ϵ ) and T ϵ β :=min(T, τ ϵ β , τ ˜ ϵ β ).

Let us remark at this point that in order to prove that P[ τ ϵ T]1 (which is our aim), it is sufficient to work on the bounded stopping time min(T, τ ϵ ), since P[ τ ϵ T]=P[min(T, τ ϵ )T]. In other words, the realizations of w ϵ which stay longer than T inside are not problematic. Therefore, we introduce τ ϵ ˆ :=min(T, τ ϵ ).

Our first claim is that on finite time intervals [0,T], w ˜ ϵ , β is a good approximation of w ϵ inside as long as one chooses β sufficiently small. To prove our claim, we proceed in two steps, first working inside E β and then in E E β :

  1. 1.

    For any β>0, one controls the difference between w ϵ and w ˜ ϵ , β on E β since one controls the difference between the drifts. By an application of Lemma A.3 below (we need here the moment Assumption 2.3(i)), there exists a constant C (which may depend on T,β,) such that

    E [ sup 0 t T ϵ β w t ϵ w ˜ t ϵ , β 2 ] Cη δ 2 .
    (21)

We conclude by an application of the Markov inequality, implying

P [ sup 0 t T ϵ β w t ϵ w ˜ t ϵ , β > δ ] 1 δ 2 E [ sup 0 t T ϵ β w t ϵ w ˜ t ϵ , β 2 ] Cη.
(22)
  1. 2.

    One needs now to control the situation outside E β , that is, on E E β . The idea is that while one does not control the difference between G and G ˜ β anymore, one can still choose β sufficiently small such that E β becomes arbitrary close to , hence implying that τ ˆ ϵ and T ϵ β are arbitrary close with high probability, namely

    θ,λ>0,β>0,P [ τ ϵ T ϵ β > λ ] <θ.
    (23)

With θ= ( δ η ) 2 and λ=δη, one obtains that for sufficiently small β,

P [ τ ϵ ˆ T ϵ β > η δ ] < ( η δ ) 2 .
(24)

Let us denote S:= sup T ϵ β t τ ϵ ˆ w t ϵ w ˜ t ϵ , β . Then, one can split the calculus of E[S] according to the event A={ τ ϵ ˆ T ϵ β >ηδ}:

E [ S ] = E [ S I A ] + E [ S I A c ] ( 2 K G T P [ A ] ) 1 / 2 + ( 2 K G E [ ( τ ˆ ϵ T ϵ β ) 2 I A c ] ) 1 / 2 C 2 η δ ,

where we have used the Cauchy-Schwarz inequality and the moment Assumption 2.3(ii) (yielding the constant K G ) in the second line.

So, we deduce by the Markov inequality that sup T ϵ β t τ ˆ ϵ w t ϵ w ˜ t ϵ , β is arbitrary small in probability.

From the combination of 1. and 2., we deduce that one can choose β small enough such that

P [ sup 0 t T τ ϵ w t ϵ w ˜ t ϵ , β > δ ] ( C 1 + C 2 )η.
(25)

We can now proceed to the application of Theorem 2.2 to the truncated system. As ( v ˜ ϵ , β 0 , w ˜ ϵ , β 0 ) remains in R p ×E, one can extend smoothly F and Σ outside so that (F,Σ) satisfies Assumptions 2.1-2.2. Therefore, one can apply Theorem 2.2 to the auxiliary system: for all δ,T>0,

lim ϵ 0 μ P [ sup t [ 0 , T ] w ˜ t ϵ , β 0 w t > δ ] =0,

where w is defined by (8). As a consequence, there exists ϵ 0 such that for all ϵ with ϵ< ϵ 0 ,

P [ sup t [ 0 , T ] w ˜ t ϵ , β 0 w t > δ ] <η.

Then, as | w ˆ t ϵ w t || w ˆ t ϵ w ˜ t ϵ , β 0 |+| w ˜ t ϵ , β 0 w t |, one deduces that for all ϵ with ϵ< ϵ 0 ,

P [ sup t [ 0 , T ] w ˆ t ϵ w t > δ ] <( C 1 + C 2 +1)η,

that is to say,

lim ϵ 0 μ P [ sup t [ 0 , T ] w ˆ t ϵ w t > δ ] =0.

We know by assumption 2. of the statement of Property 2.3, for all t0, w t E, so we conclude the proof by observing that for all T>0,

lim ϵ 0 P[ τ ϵ T]=1.

 □

In the following lemma, we show that the solutions of two SDEs, whose drifts are close on a subset of the state space, remain close on a finite time interval. The difficulty here lies in the fact that we deal with only locally Lipschitz coefficients.

Lemma A.3 Suppose x and y are solutions, with identical initial conditions in H R n , of the following stochastic differential equations in R n :

d x t =a( x t ,t)dt+b( x t ,t)dB(t),
(26)
d y t =h( y t )a( y t ,t)dt+b( y t ,t)dB(t).
(27)

Let T>0 be a fixed time. We define

τ H =min ( inf { t 0 ; x t H } , inf { t 0 ; y t H } ) .

We make the following assumptions:

  1. 1.

    Approximation assumption:

    sup y H h ( y ) 1 ξ;
  2. 2.

    Local Lipschitz assumption: for all a,b R n with max(a,b)R, there exists a constant C R such that

    a ( x , t ) a ( y , t ) 2 C R x y 2 ;
  3. 3.

    Boundedness assumption: there exists p>2 and A>0 such that

    E [ sup 0 t T x t p ] AandE [ sup 0 t T y t p ] A,

and if xR, then there exists K R such that a(x) K R .

Under the above assumptions, there exists a constant C (depending on the quantities defined above, but not on ξ) such that

E [ sup 0 t min ( T , τ H ) x t y t 2 ] C ξ 2 .
(28)

Proof Although the Lipschitz constant is not bounded on , we can use the boundedness assumption to show that the probability of reaching a level R before time T will be very small for large R, and then use the classical strategy inside { x t R} where everything works as if the coefficients were globally Lipschitz. A similar strategy is used in [33] to prove a strong convergence theorem for the Euler scheme without the global Lipschitz assumption. We adapt here the ideas of their proof to our setting.

Therefore, we introduce the following stopping times:

θ R : = inf { t 0 ; x t R } , θ R β : = inf { t 0 ; y t R } and ρ : = min ( θ R , θ R β , τ H ) .

We also denote e(t):= x t y t .

Splitting the following expectation according to the value of ρ, and applying the Young inequality,

ab d r a r + 1 q d q / r b q for  r 1 + q 1 =1 and any a,b,d>0,

we obtain, for any d>0,

E [ sup 0 t min ( T , τ H ) e ( t ) 2 ] E [ sup 0 t min ( T , τ H ) e ( min ( t , ρ ) ) 2 ] + 2 d p E [ sup 0 t T s e ( t ) p ] + 1 2 / p d 2 / ( p 2 ) P [ θ R T  or  θ R β T ] .

Then we use the boundedness assumption and the Markov inequality to deduce that

P [ θ R T  or  θ R β T ] 2 A R p  and E [ sup 0 t T e ( t ) p ] 2 p A.

Now, we can focus on the supremum of the error before time ρ. We first apply the Cauchy-Schwarz inequality

e ( min ( t , ρ ) ) 2 = 0 min ( t , ρ ) ( a ( x s , s ) h ( y s ) a ( y s , s ) ) d s + 0 min ( t , ρ ) ( b ( x s , s ) b ( y s , s ) ) d B ( s ) 2 2 [ T 0 min ( t , ρ ) a ( x s , s ) h ( y s ) a ( y s , s ) 2 d s + 0 min ( t , ρ ) ( b ( x s , s ) b ( y s , s ) ) d B ( s ) 2 ] .

Then, we use the local Lipschitz and the boundedness assumptions, together with the Doob inequality (the first inequality) to deal with the stochastic integral: for any u>0,

E [ sup 0 t u e ( min ( t , ρ ) ) 2 ] 2 E [ T 0 min ( u , ρ ) a ( x s , s ) h ( y s ) a ( y s , s ) 2 d s + 4 0 min ( u , ρ ) σ ( x s , s ) σ ( y s , s ) 2 d s ] 2 E [ T C R 0 min ( u , ρ ) x s y s 2 d s + T 2 K R 2 ξ 2 + 4 C R 0 min ( u , ρ ) x s y s 2 d s ] 2 C R ( T + 4 ) E [ 0 min ( u , ρ ) sup 0 r s { x min ( r , ρ ) y min ( r , ρ ) 2 } d s ] + 2 T 2 K R 2 ξ 2 .

We then apply the Gronwall lemma

E [ sup 0 t T e ( min ( t , ρ ) ) 2 ] 2 T 2 K R 2 ξ 2 e 2 C R ( T + 4 ) .
(29)

Finally, we choose d small enough such that

2 p + 1 d A p ξ 2 ,

and R large enough such that

2 A ( p 2 ) R p p d 2 / ( p 2 ) ξ 2

yielding

E [ sup 0 t min ( T , τ H ) x t y t 2 ] ( 2 + 2 T 2 K R 2 e 2 C R ( T + 4 ) ) ξ 2 .

 □

Appendix B: Proofs of Section 3

B.1 Notations and definitions

Throughout the paper, lower-case normal letters are constants, lower-case bold letters are vectors or vector-valued functions, and upper-case bold letters are matrices.

  • l,κ,τ, ϵ 1 , ϵ 2 ,μ, σ 2 ,β,γ, a ± R +

    are parameters of the network. We also define Δ= 1 4 l β for Section 3.3 and Σ R n × n , a fixed noise matrix, for Section 3.4. We write s 2 =Σ Σ .

  • nN

    is the number of neurons in the network.

  • v C 1 ( R + , R n )

    is the field of membrane potential in the network.

  • u C 1 ( R + , R n )

    is the field of inputs to the network. We write

    u m = sup t R + u ( t ) 2 .
  • vu C 1 ( R + , R n × n )

    is the tensor product between u and v, which simply means { u v } i j (t)= u i (t) v j (t).

  • W C 1 ( R + , R n × n )

    is the connectivity of the network. Throughout the paper, we assume W(0)=0.

  • x,y

    is the scalar product between two vectors x,y R n .

  • u ( t ) p

    for p=1,2 is the L p norm of u(t) R n , i.e., u ( t ) p = ( i = 1 n | u i ( t ) | p ) 1 p . And similarly for the connectivity matrices of R n × n with a double sum.

  • W= sup x C n , x = 1 |x,Wx|= max i { 1 , , n } {| λ i |: λ i  is an eigenvalue of W}

    .

  • J

    is the transpose of the matrix J R n × n .

  • x y R n × n

    is the cross-correlation matrix of two compactly supported and differentiable functions from to R n , i.e.,

    { x y } i j = + x i (t) y j (t)dt.
  • H is the Heaviside function, i.e.,

    H(t)={ 0 if  t 0 , 1 if  t > 0 .
  • The real functions

    g γ : t γ e γ t H ( t ) , v : t l μ Δ ( e β 2 μ ( 1 Δ ) t e β 2 μ ( 1 + Δ ) t ) H ( t ) , w : t l 2 μ Δ ( ( 1 + Δ ) e β 2 μ ( 1 Δ ) t ( 1 Δ ) e β 2 μ ( 1 + Δ ) t ) H ( t )
    (30)

are integrable on .

B.1.1 Notations for the Appendix

The computations involve a lot of convolutions and, for readability of the Appendix, we introduce some new notations. Indeed, we rewrite the time-convolution between u and g, an integrable function on ,

ug=uG.

This suggests one should think of v as a semi-continuous matrix of R n × R and of G γ as a continuous matrix of R R × R , such that u i t = u i (t) and G s t =g(ts). Indeed, in this framework the convolution with g is nothing but the continuous matrix multiplication between v and a continuous Toeplitz matrix generated row by row by g. Hence, the operator ‘’ can be though of as a matrix multiplication.

Therefore, it is natural to define ( u g ) = ( u G ) = G u , where G R R × R is the transpose of G, i.e., the continuous Toeplitz matrix generated row by row by g():tg(t) and u R R × n . Thus, for g and h, two integrable functions on , we can rewrite

(xg) ( y h ) =xG H y ,

where G and are their associated continuous matrices. More generally, the bold curved letters G, V, W represent these continuous Toeplitz matrices which are well defined through their action as convolution operators with g, v, and w. The previous formulation naturally expresses the symmetry of relation (14).

B.2 Hebbian learning with linear activity

In this part, we consider system (12).

B.2.1 Application of temporal averaging theory

Theorem B.1 If Assumption  3.1 is verified for p]0,1[, then system (12) is asymptotically well posed in probability and the connectivity matrix W ϵ , the solution of system (12), converges to W, in the sense that for all δ,T>0,

lim ϵ 0 μ P [ sup t [ 0 , T ] | W t ϵ W t | 2 > δ ] =0,

where W is the deterministic solution of

d W i j d t = G ¯ ( W ) i j = κ W i j decay + μ τ 0 τ μ v ¯ i ( s ) v ¯ j ( s ) d s correlation + σ 2 2 ( L W ) i j 1 noise ,

where v ¯ (t) is the τ μ -periodic attractor of d v ¯ d t =(WL) v ¯ +u(μt), where W R n × n is supposed to be fixed.

Proof We are going to apply Property 2.3. For p]0,1[, consider the space

E p = { W R n × n : W  is symmetric , W 0  and  W < l p } .

First, since LW is strictly positive for W in E p , Assumptions 2.1-2.2 are satisfied on R n × E p . Then, we only need to compute the averaged vector field G ¯ and show that E p is invariant under the flow of G ¯ .

  1. 1.

    Computation of the averaged vector field G ¯ :

The fast variable is linear, the averaged vector field is given by (10). This reads

G ¯ (W)= ( τ μ ) 1 0 τ μ x R n G ( v ¯ ( t ) + x , W ) N 0 , Q (dx)dt,

where N v , Q is the probability density function of the Gaussian law with mean v and covariance Q. And Q is the unique solution of (9), with Σ=σId. This leads to Q= σ 2 2 ( L W ) 1 .

Therefore,

G ¯ ( W ) = κ W + μ τ 0 τ μ ( x R n ( v ¯ ( t ) + x ) ( v ¯ ( t ) + x ) N 0 , Q ( d x ) ) d t = κ W + μ τ 0 τ μ v ¯ ( t ) v ¯ ( t ) d t + μ τ 0 τ μ ( v ¯ ( t ) x R n x N 0 , Q ( d x ) E x p e c t a t i o n o f N ( 0 , Q ) = 0 ) d t + μ τ 0 τ μ ( x R n x N 0 , Q ( d x ) E x p e c t a t i o n o f N ( 0 , Q ) = 0 v ¯ ( t ) ) d t + μ τ 0 τ μ ( x R n x x N 0 , Q ( d x ) C o v a r i a n c e o f N ( 0 , Q ) = Q ) d t = κ W + μ τ 0 τ μ v ¯ ( t ) v ¯ ( t ) d t + σ 2 2 ( L W ) 1 .

The integral term in the equation above is the correlation matrix of the τ μ -periodic function v ¯ ¯ . To rewrite this term, we define v ¯ R n × [ 0 , τ μ [ such that v ¯ (i,t)= v ¯ ( t ) i . v ¯ can be seen as a matrix gathering the history of v ¯ , i.e., each column of v ¯ corresponds to the vector v ¯ (t) for a given t[0, τ μ [. It turns out

0 τ μ v ¯ (t) v ¯ (t)dt= v ¯ v ¯ .

Therefore,

G ¯ (W)=κW+ μ τ v ¯ v ¯ + σ 2 2 ( L W ) 1 .

According to the results in Section 2, the solutions of a differential system with such a right-hand side are close to that of the initial system (12). Hence, we focus exclusively on it and try to unveil the properties of its solutions which will be retrospectively extended to those of the initial system (12).

  1. 2.

    Invariance of E p under the flow of (13):

Here we assume that W(0) E p and we want to prove that the trajectory of W is in E p , too.

  1. (a)

    Symmetry:

It is clear that each term in G ¯ is symmetric. Their sum is therefore symmetric and so is W(t).

  1. (b)

    Inequality W0:

The correlation term v ¯ v ¯ is a Gramian matrix and is therefore positive. Because LW is assumed to be positive, therefore, its inverse is also positive. Therefore, if e i is an eigenvector of W0 associated with a null eigenvalue, then e i G ¯ (W) e i 0. Thus, the trajectories of (13) remain positive.

  1. (c)

    Inequality W<lp:

The argument here is that of the inward pointing subspace. We intend to find a condition under which the flow G ¯ is pointing inward the space {W:W<lp}. Roughly speaking, this will be done by defining a real valued function g strictly negative on the subspace and positive outside and then showing that its gradient (or differential) on the border goes in the opposite direction of the flow, i.e., d W g( G ¯ (W))<0 for W g 1 (0).

For all x C n such that x=1, define a family of positive numbers ( α x ) whose supremum is written α and a family of functions ( g x ) such that

g x : R n × n R , J J x 2 α x 2 .

Observe that the differential of g x at W applied to J is d g W x (J)= 1 2 Wx,Jx. For W g x 1 (0), i.e., Wx= α x , compute

2 d g W x ( G ¯ ( W ) ) = κ W x , W x = α x 2 + μ τ W x , v ¯ v ¯ x = A + W x , σ 2 2 ( L W ) 1 x = B .

Therefore, for α <l

2 d g W x ( G ¯ ( W ) ) α x κ α x + u m 2 ( l α ) 2 + σ 2 2 ( l α ) = 1 ( l α ) 2 P ( α ) + κ ( α α x ) ,

where

P(α)=κ α 3 +2κl α 2 ( κ l 2 + σ 2 2 ) α+ ( u m 2 + l σ 2 2 ) .
(31)

Now write α =pl with p]0,1[. Equation (31) becomes

P(p)=κ l 3 p ( 1 p ) 2 + l σ 2 2 (1p)+ u m 2 .

When there exists p such that P(p)<0 (which corresponds to Assumption 3.1), then their exists a ball of radius pl on which the dynamics is pointing inward. It means any matrix W whose maximal eigenvalue is α =pl will see this eigenvalue (and those which are sufficiently close to it, i.e., for which α α x >0 is sufficiently small) decreasing along the trajectories of the system. Therefore, the space E p is invariant by the flow of the system iff Assumption 3.1 is satisfied.

  • Upper bound of A:

Applying Cauchy-Schwarz leads to

| A | W x v ¯ v ¯ x α x 0 τ μ v ¯ ( s ) v ¯ ( s ) x d s α x 0 τ μ | v ¯ ( s ) , x | v ¯ ( s ) d s α x 0 τ μ v ¯ ( s ) 2 d s .

However, for t0

v ¯ ( t ) t e ( W L ) ( t s ) u ( μ s ) d s u m t e ( α l ) ( t s ) d s u m e ( α l ) t [ e ( α l ) s l α ] t = u m l α .

Therefore, A α x τ u m 2 μ ( l α ) 2 .

  • Upper bound of B:

Observe that for J a positive definite matrix whose eigenvalues are the λ i , then the spectrum of J 1 is { 1 λ i }. Therefore, J 1 = 1 min ( λ i ) . Therefore, if J=LW, then J 1 1 l α .

Using the previous observation and Cauchy-Schwarz leads to

|B| α x σ 2 2 ( L W ) 1 α x σ 2 2 ( l α ) .

The trajectories of system (13) with the initial condition in E p are defined on R + and remain bounded. Indeed, if W(0) E p , the connectivity will stay in E p , in particular 0<LWL along the trajectories, more precisely LW is a strictly positive constant since p]0,1[. Because v ¯ is also bounded by u m l ( 1 p ) , v ¯ v ¯ + σ 2 2 ( L W ) 1 is bounded. The right-hand side of system (13) is the sum a bounded term and a linear term multiplied by a negative constant; therefore, the system remains bounded and it cannot explode in finite time: it is defined on R + . □

B.2.2 An expansion for the correlation term

We first state a useful lemma.

Lemma B.2 If v ¯ is the solution, with zero as initial condition, of d v ¯ d t =(WL) v ¯ +u(t), it can be written by the sum below which converges if W is in E p for p]0,1[.

v ¯ = k = 0 + W k l k + 1 u g l ( k + 1 ) ,

where g l :tl e l t H(t).

Proof It can be proven as a trivial rewriting of the variation of parameters formula for linear systems. A more general approach, which extends to delayed systems, was developed by Galtier and Touboul [25]; see the first example for the proof of this lemma. □

This is useful to find the next result.

Property B.3 The correlation term can be written

μ τ v ¯ v ¯ = u m 2 l 2 k , q = 0 + W k l k C k , q W q l q .

Proof We can use Lemma 3.2 with μ1 and compute the cross product v ¯ v ¯ .

Therefore, consider u(μ):tu(μt) instead of u. A change of variable shows that (u(μ) g l ( k ) )(t)= 1 μ (u g l ( k ) ( μ ))(μt). Therefore,

μ τ { v ¯ v ¯ } i j = μ τ 0 τ μ v ¯ i ( t ) v ¯ j ( t ) d t = 1 τ 0 τ v ¯ i ( s μ ) v ¯ j ( s μ ) d s = 1 τ 0 τ ( k = 0 + W k l k + 1 ( u ( μ ) g l ( k + 1 ) ) ( s μ ) ) i × ( q = 0 + W q l q + 1 ( u ( μ ) g l ( q + 1 ) ) ( s μ ) ) j d s = 1 τ 0 τ ( k = 0 + W k l k + 1 ( u g l ( k + 1 ) ( / μ ) μ ) ( s ) ) i × ( q = 0 + W q l q + 1 ( u g l ( q + 1 ) ( / μ ) μ ) ( s ) ) j d s = { u m 2 l 2 k , q = 0 + W k l k C k , q W q l q } i j .

 □

B.2.3 Global stability of the single equilibrium point

Theorem B.4 If Assumption  3.1 is verified for p 1 3 , then there is a unique equilibrium point in the invariant subset E p which is globally, asymptotically stable.

Proof For this proof, define F(W)= u m 2 l 2 k , q = 0 + W k l k C k , q W q l q + σ 2 2 ( L W ) 1 .

First, we compute the differential of F and show it is a bounded operator. Second, we show it implies the existence and uniqueness of an equilibrium point under some condition. Then, we find an energy for the system which says the fixed point is a global attractor. Finally, we show the stability condition is the same as Assumption 3.1 for p 1 3 .

  1. 1.

    We compute the differential of each term in F: The differential of F at W is the sum of these two terms.

  • Formally write the second term v ¯ v ¯ (W)= k , q = 0 + W k l k C k , q W q l q . To find its differential, compute v ¯ v ¯ (W+J) v ¯ v ¯ (W) and keep the terms at the first order in J. Before computing the whole sum, observe that

    ( W + J ) k C k , q ( W + J ) q W k C k , q W q = m = 0 k 1 W m J W k 1 m C k , q W q + m = 0 q 1 W k C k , q W m J W q 1 m + O ( J 2 ) .

This leads to

d v ¯ v ¯ W ( J ) = 1 l k , q = 0 + ( m = 0 k 1 W m l m J W k 1 m l k 1 m C k , q W q l q + l = 0 q 1 W k l k C k , q W m l m J W q 1 m l q 1 m ) .
  • Write Q:W ( L W ) 1 . We can write (LW)Q(W)=Id and use the chain rule to compute the differential of Q at W, which gives JQ(W)+(LW)d Q W (J)=0. Therefore,

    d Q W (J)= ( L W ) 1 J ( L W ) 1 .
  1. 2.

    We want to compute the norm of d F W ( J ) 2 for J 2 =1. First, observe that for three square matrices A, B, and C,

    A B C 2 2 = i , j = 1 n B i j 2 A ( e i e j ) C 2 2 i , j = 1 n B i j 2 A e i 2 2 C e j 2 2 i , j = 1 n B i j 2 A 2 C 2 ,

for e i the vectors of the canonical basis of R n . This leads to A B C 2 B 2 AC. Therefore, because A A 2 ,

W m l m J W k 1 m l k 1 m C k , q W q l q 2 W m l m W k 1 m l k 1 m C k , q W q l q 2 u m 2 W k 1 l k 1 W q l q .

Therefore,

d F W ( J ) 2 u m 2 l 3 k , q = 0 + ( k W k 1 l k 1 W q l q + q W k l k W q 1 l q 1 ) + σ 2 2 ( L W ) 1 2 2 u m 2 l 3 ( k = 0 + k p k 1 ) ( q = 0 + p q ) + σ 2 2 ( L W ) 1 2 2 u m 2 l 3 ( 1 p ) 3 + σ 2 2 l 2 ( 1 p ) 2 .

This inequality is true for all J with J 2 =1; therefore, it is also true for the operator norm

d F W 2 u m 2 l 3 ( 1 p ) 3 + σ 2 2 l 2 ( 1 p ) 2 .

Therefore, F is a k-Lipschitz operator where k= 2 u m 2 l 3 ( 1 p ) 3 + σ 2 2 l 2 ( 1 p ) 2 . This means F ( W ) F ( J ) 2 k W J 2 .

  1. 3.

    The equilibrium points of system (15) necessarily verify the equation W= 1 κ F(W). If

    2 u m 2 ( 1 p ) 3 + l σ 2 2 ( 1 p ) 2 <κ l 3 ,
    (32)

then 1 κ F is a contraction map from E p to itself. Therefore, the Banach fixed point theorem says that there is a unique fixed point which we write W .

  1. 4.

    We now show that, under assumption (32), W W W 2 2 is an energy function for the system d W d t =W+ 1 κ F(W) (which is a rescaled version of system (15)).

Indeed, compute the derivative of this energy along the trajectories of the system

2 d d t W ( t ) W 2 2 = W W , W + 1 κ F ( W ) = W W , W W + W W , 1 κ F ( W ) W = W W 2 2 + W W , 1 κ F ( W ) 1 κ F ( W ) W W 2 2 + W W 2 1 κ F ( W ) 1 κ F ( W ) 2 1 κ l 3 ( 2 u m 2 ( 1 p ) 3 + l σ 2 2 ( 1 p ) 2 κ l 3 ) W W 2 2 0 .

The energy is lower-bounded, takes its minimum for W= W and the decreases along the trajectories of the system. Therefore, W is globally asymptotically stable if assumption (32) is verified.

  1. 5.

    Observe that if Assumption 3.1 is verified for p 1 3 , then 1 1 p < 2 1 p 1 p . Therefore, Assumption 3.1 implies that (32) is also true. This concludes the proof. □

B.2.4 Explicit expansion of the equilibrium point

Recall the notations p ˜ = u m 2 κ l 3 + σ 2 2 κ l 2 and λ= σ 2 l 2 u m 2 .

Theorem B.5

W = p ˜ l 1 + λ ( λ + C 0 , 0 ) + p ˜ 2 l ( 1 + λ ) 2 ( λ 2 + λ ( C 0 , 0 + C 1 , 0 + C 0 , 1 ) + C 0 , 0 C 1 , 0 + C 0 , 1 C 0 , 0 ) + O ( p ˜ 3 ) .

Actually, it is possible to compute recursively the n th term of the expansion above, although their complexity explodes.

Proof Define p the smallest value in ]0,1[ such that Assumption 3.1 is valid. This implies

p ( ( 1 p ) 2 + σ 2 2 κ l 2 ) = u m 2 κ l 3 + σ 2 2 κ l 2 .

The weak connectivity index p ˜ controls the ratio of the connection over the strength of intrinsic dynamics. Indeed, these two variables are of the same order because

p p ˜ = 1 ( 1 p ) 2 + σ 2 2 κ l 2 = O p ˜ 0 (1).

We want to approximate the equilibrium W , i.e., the solution of G ¯ ( W )=0, in the regime p ˜ 1. Define Ω= W p ˜ l such that Ω=O(1). We abusively write G ¯ (Ω)= G ¯ ( p ˜ lΩ) such that

G ¯ (Ω)= p ˜ lκΩ+ u m 2 l 2 k , q = 0 + ( p ˜ Ω ) k C k , q ( p ˜ Ω ) q + σ 2 2 l k = 0 + ( p ˜ Ω ) k .

Recalling λ= σ 2 l 2 u m 2 leads to

G ¯ (Ω)= ( u m 2 l 2 + σ 2 2 l ) ( Ω + 1 1 + λ k , q = 0 + ( p ˜ Ω ) k C k , q ( p ˜ Ω ) q + λ 1 + λ k = 0 + ( p ˜ Ω ) k ) .

Now, we write a candidate Ω ( m ) = a = 0 m p ˜ a Ω a , then we chose the terms Ω a =O(1) so that the first m th orders in G ¯ ( Ω ( m ) ) vanish. This implies that G ¯ ( Ω ) G ¯ ( Ω ( m ) )=O( p ˜ m + 1 ), where Ω = W p ˜ l . Then, we use the fact that the minimal absolute value of the eigenvalues of G ¯ is larger than κ( 2 u m 2 l 3 ( 1 p ) 3 + σ 2 2 l 2 ( 1 p ) 2 )>0. Indeed, it means

W W ( m ) < 1 κ ( 2 u m 2 l 3 ( 1 p ) 3 + σ 2 2 l 2 ( 1 p ) 2 ) O ( p m + 1 ) < 1 κ ( 2 u m 2 l 3 + σ 2 2 l 2 ) O ( p m + 1 ) ,

i.e., Ω ( m ) = Ω +O( p ˜ m + 1 ).

Thus, we need to find the Ω a such that the first m th orders in G ¯ ( Ω ( m ) ) vanish. Therefore, we need to expand all the terms in G ¯ (Ω). The first term is obvious. In the following, we write the second term F(Ω) associated to the correlations and look for an explicit expression of the F a such that F(Ω)= a = 0 + p ˜ a F a . Second, we write the third term Q(Ω) associated to the noise and look for an explicit expression of the Q a such that Q(Ω)= a = 0 + p ˜ a Q a .

  • Finding the F a :

First, observe that

Ω q = i = 0 + p ˜ i η N q , k η k = i Ω η 1 Ω η 2 Ω η q .
(33)

This leads to

F ( Ω ) = 1 1 + λ k , q = 0 + i , j = 0 j i + p ˜ i + k + q η N j , n η n = k θ N i j , n θ n = q Ω η 1 Ω η j × C j , i j Ω θ 1 Ω θ i j .

The a th term in the power expansion in p ˜ verifies a=i+k+q. More precisely, this reads

F a = 1 1 + λ k , q , i = 0 a = i + k + q + j = 0 i η N j , n η n = k θ N i j , n θ n = q Ω η 1 Ω η j C j , i j Ω θ 1 Ω θ i j .

This equation is scary but it reduces to simple expressions for small aN.

  • Finding the Q a :

Using equation (33) leads to

Q(Ω)= λ 1 + λ i , q = 0 + p ˜ i + q η N q , k η k = i Ω η 1 Ω η 2 Ω η q .

The a th term in the power expansion in p ˜ verifies a=i+q. More precisely, this reads

Q a = λ 1 + λ q , i = 0 a = i + q + p ˜ i + q η N q , k η k = i Ω η 1 Ω η 2 Ω η q .

Therefore,

a ( 1 + 1 λ ) Q a ( 1 + λ ) F a 0 I d C 0 , 0 1 Ω 0 Ω 0 C 1 , 0 + C 0 , 1 Ω 0 2 Ω 0 2 + Ω 1 Ω 0 2 C 2 , 0 + C 0 , 2 Ω 0 2 + Ω 0 C 1 , 1 Ω 0 + Ω 1 C 1 , 0 + C 0 , 1 Ω 1

Therefore, it is easy to compute Ω a = F a + Q a for aN. By definition W= p ˜ lΩ= p ˜ l(F+Q), which leads to the result. □

B.3 Trace learning with damped oscillators and dynamic synapses

Theorem B.6 If Assumption  3.1 is verified for p]0,1[, then system (16) is asymptotically well posed in probability and the connectivity matrix W ϵ , solution of system (16), converges to W in the sense that for all δ,T>0,

lim ϵ 0 μ P [ sup t [ 0 , T ] | W t ϵ W t | 2 > δ ] =0,

where W is the deterministic solution of

d W i j d t = G ¯ ( W ) i j = κ W i j decay + μ τ 0 τ μ ( v ¯ i g β ) ( s ) ( v ¯ j g β ) ( s ) d s correlation + Q 22 noise ,

where v ¯ (t) is the τ μ -periodic attractor of d v ¯ d t =(WL) v ¯ g β +u(μt), where W R n × n is supposed to be fixed. And Q 22 is a noise matrix described below.

Proof First, let us recall the instantaneous reformulation of (16)

{ d ( v ϵ z ϵ ) = 1 ϵ 1 [ ( 0 W L β β ) ( v ϵ z ϵ ) + ( u ( t ϵ 2 ) 0 ) ] d t + ( σ ϵ 1 d B ( t ) σ z ϵ 1 d B ( t ) ) , d W ϵ d t = κ W ϵ + z ϵ z ϵ .

Starting from this system, the structure of the proof of Theorem 3.1 remains unchanged. The correlation term is to be replaced by μ τ v ¯ G β G β v ¯ . The noise term we are looking for is Q 22 in the Lyapunov equation (see (9)) below

( 0 W L β β ) ( Q 11 Q 12 Q 12 Q 22 ) + ( Q 11 Q 12 Q 12 Q 22 ) ( 0 β W L β ) + ( σ 2 0 0 σ z 2 ) =0.

Because the learning rule is symmetric, then the space of symmetric matrices is invariant and we can restrict this section to the symmetric case. It is easy to show that this Lyapunov equation has a unique solution, because the sum of two eigenvalues of the drift matrix is never null (provided W stays in E p ). This leads to the system

{ ( W L ) Q 12 + Q 12 ( W L ) + σ 2 = 0 , ( a ) β ( Q 11 Q 12 ) + Q 22 ( W L ) = 0 , ( b ) Q 22 = Q 12 + Q 12 2 + σ z 2 2 β I d . ( c )

One solution of equation (a) is Q 12 = σ 2 2 ( L W ) 1 . Equation (c) defines Q 22 properly. Indeed, because W is symmetric, so is Q 12 and Q 22 = σ 2 2 ( L W ) 1 + σ z 2 2 β I d . Similarly, equation (b) defines Q 11 but it remains to be checked that this definition is that of a symmetric matrix. In fact, it works because W is assumed symmetric and the noise has no off-diagonal terms. Indeed, in this case, Q 11 = σ 2 2 ( L W ) 1 + σ 2 2 β + σ z 2 2 β 2 (LW). This solution is thus the unique solution of the Lyapunov equation.

Therefore,

G ¯ (W)=κW+ μ τ v ¯ G β G β v ¯ + σ 2 2 ( L W ) 1 + σ z 2 2 β I d .

Thus, this application of Theorem 2.2 to the instantaneous system with σ z 0, leads to the previous averaged equation. To recover the initial case (16), we can let σ z 0. We see that the function G ¯ tends to

G ¯ (W)=κW+ μ τ v ¯ G β G β v ¯ + σ 2 2 ( L W ) 1 ,

which we will rewrite G ¯ for simplicity in the following. Thus, this definition of G ¯ defines the averaged system for the original equation (16).

In the derivation of the condition under which W remain smaller than lp, the upper bound of the term A changes as follows. Define M R + so that v ¯ (t)M for all t>0. Because we assume v ¯ ( R )=0, the variation of parameters formula for linear retarded differential equations with constant coefficients (see Chapter 6 of [34]) reads v ¯ (t)= 0 t U(ts)u(μs)ds where the resolvent U is the solution of U ˙ =(WL)(Ug). We use Corollary 1.1 of Chapter 6 of [34], which is based on Grönwall’s lemma, to claim that U(ts) e ( t s ) ( α l ) . Therefore,

v ¯ ( t ) t U ( t s ) u ( μ s ) ds u m [ e ( t s ) ( α l ) l α ] t u m l α =M.

Then, we used Young’s inequality for convolution to get ( v ¯ g ) ( t ) 2 v ¯ 2 g 1 = v ¯ 2 .

Therefore, the upper bound of A remains unchanged.

Therefore, the polynomial P remains the same and Assumption 3.1 is still relevant to this problem. □

Lemma B.7 If v ¯ is the solution, with zero as initial condition, of d v ¯ d t =(WL) v ¯ g β +u(t), it can be written by the sum below which converges if W is in E p for p]0,1[.

v ¯ = k = 0 + W k l k + 1 u W ˜ V ˜ k ,

where W ˜ and V ˜ are convolution operators respectively generated by the functions w ˜ and v ˜ detailed below

w ˜ : t l 2 Δ ( ( 1 + Δ ) e β 2 ( 1 Δ ) t ( 1 Δ ) e β 2 ( 1 + Δ ) t ) H ( t ) , v ˜ : t l Δ ( e β 2 ( 1 Δ ) t e β 2 ( 1 + Δ ) t ) H ( t ) ,

where H is the Heaviside function, Δ= 1 4 l β . If Δ is a pure imaginary number, the expression above still holds with the hyperbolic functions sh and ch being turned into classical trigonometric functions sin and cos and Δ being replaced by its modulus.

If W is in E p for p]0,1[, then this expansion converges.

Proof See the second example of [25]. □

Using Lemma C.3, on can redefine

C ˜ k , q = 1 u m 2 τ v 1 k + q + 2 u V k + 1 ( u V q + 1 ) ,

where V is the convolution operator generated by v(t)= l μ Δ ( e β 2 μ ( 1 Δ ) t e β 2 μ ( 1 + Δ ) t )H(t) (see Appendix C for details). Observe that applying Young’s inequality for convolutions leads to C ˜ k , q 2 1.

Therefore, we can rewrite Theorem 3.3 into

Theorem B.8

μ τ v ¯ G β G β v ¯ = u m 2 v 1 2 l 2 k , q = 0 + W k ( l / v 1 ) k C ˜ k , q W q ( l / v 1 ) q .

Proof Similar to that of Theorem 3.3. □

Theorem B.9 If Assumption  3.1 is verified for p 1 2 v 1 3 + 1 , there is a unique equilibrium point which is globally, asymptotically stable.

Proof Similar to the proof of Theorem B.4. □

With the same definitions for p ˜ = u m 2 κ l 3 + σ 2 2 κ l 2 and λ= σ 2 l 2 u m 2 , we can show

Theorem B.10 The weakly connected expansion of the equilibrium point is

W = p ˜ l 1 + λ ( λ + v 1 2 C ˜ 0 , 0 ) + p ˜ 2 v 1 l ( 1 + λ ) 2 ( λ 2 v 1 + λ ( v 1 C ˜ 0 , 0 + v 1 2 C ˜ 1 , 0 + v 1 2 C ˜ 0 , 1 ) + v 1 4 C ˜ 0 , 0 C ˜ 1 , 0 + v 1 4 C ˜ 0 , 1 C ˜ 0 , 0 ) + O ( p ˜ 3 v 1 2 ) .

Proof Define Ω= W p ˜ l so that

G ¯ ( Ω ) = ( u m 2 l 2 + σ 2 2 l ) ( Ω + v 1 2 1 + λ k , q = 0 + ( p ˜ v 1 Ω ) k C ˜ k , q ( p ˜ v 1 Ω ) q + λ 1 + λ k = 0 + ( p ˜ Ω ) k ) .

So, the expansion will be in orders of p ˜ v 1 with v 1 1.

Therefore,

a ( 1 + 1 λ ) Q a 1 + λ v 1 2 F a 0 I d C ˜ 0 , 0 1 Ω 0 v 1 Ω 0 C ˜ 1 , 0 + C ˜ 0 , 1 Ω 0 2 Ω 0 2 + Ω 1 v 1 2 Ω 0 2 C ˜ 2 , 0 + C ˜ 0 , 2 Ω 0 2 + Ω 0 C ˜ 1 , 1 Ω 0 + Ω 1 C ˜ 1 , 0 + C ˜ 0 , 1 Ω 1

Actually, it is possible to compute recursively the n th terms, although their complexity explodes. Therefore, it is easy to compute Ω a = F a + Q a for aN. By definition W= p ˜ lΩ= p ˜ l(F+Q), which leads to the result. □

B.4 STDP learning with linear neurons and correlated noise

Consider the following n-dimensional stochastic differential system:

{ d v ϵ = 1 ϵ 1 ( L v ϵ + W v ϵ + u ( t ϵ 2 ) ) d t + 1 ϵ 1 Σ d B ( t ) , d W ϵ d t = G ( v ϵ , W ϵ ) = κ W ϵ + a + v ϵ ( v ϵ g γ ) a ( v ϵ g γ ) v ϵ ,

where u is a continuous input in R n , l, ϵ 1 , ϵ 2 ,κ R + , a + , a R, Σ R n × n and B(t) is n-dimensional Brownian noise, and for all γ>0, g γ :tγ e γ t H(t) where H is the Heaviside function. Recall the well-posedness Assumption 3.2

Assumption B.1 There exists p]0,1[ such that

| a + | + | a | p ( 1 p ) ( s 2 γ 2 ( 1 + γ / l p ) + u m 2 ( 1 p ) ) <κ l 3 .

Theorem B.11 If Assumption  3.2 is verified for p]0,1[, then system (17) is asymptotically well posed in probability and the connectivity matrix W ϵ , the solution of system (17), converges to W in the sense that for all δ,T>0,

lim ϵ 0 μ P [ sup t [ 0 , T ] | W t ϵ W t | 2 > δ ] =0,

where W is the deterministic solution of

d W i j d t = G ¯ ( W ) i j = κ W i j decay + μ τ 0 τ μ a + v ¯ i ( s ) ( v ¯ j g γ ) ( s ) a ( v ¯ i g γ ) ( s ) v ¯ j ( s ) d s correlation + Q 12 noise ,

where v ¯ (t) is the τ μ -periodic attractor of d v ¯ d t =(WL) v ¯ +u(μt), where W R n × n is supposed to be fixed. And Q 12 is described below.

Proof We recall the instantaneous reformulation of the original system (17)

{ d ( v ϵ z ϵ ) = 1 ϵ 1 [ ( W L 0 γ γ ) ( v ϵ z ϵ ) + ( u ( t ϵ 2 ) 0 ) ] d t + ( σ ϵ 1 d B ( t ) σ z ϵ 1 d B ( t ) ) , d W ϵ d t = κ W ϵ + a + v ϵ z ϵ a z ϵ v ϵ .

With this linear expression, the structure of the proof of Theorem 3.1 remains unchanged. The correlation term is to be replaced by μ τ ( a + v ¯ G γ v ¯ + a v ¯ G γ v ¯ ). The noise term we are looking for is Q 12 in the Lyapunov equation (see (9)) below

( W L 0 γ γ ) ( Q 11 Q 12 Q 12 Q 22 ) + ( Q 11 Q 12 Q 12 Q 22 ) ( W L γ 0 γ ) + ( Σ Σ 0 0 σ z 2 2 I d ) = 0 .

This leads to the system

{ ( W L ) Q 11 + Q 11 ( W L ) + Σ Σ = 0 , ( a ) γ ( Q 11 Q 12 ) + Q 12 ( W L ) = 0 , ( b ) Q 22 = Q 12 + Q 12 2 + σ z 2 2 I d . ( c )
(34)

The matrix Q 11 is the solution of a Lyapunov equation (see equation (a)). Lemma D.1 gives an explicit solution: Q 11 = k = 0 + W k Σ Σ ( 2 L W ) ( k + 1 ) . Equation (b) leads to

Q 12 =γ Q 11 ( L + γ W ) 1 =γ k = 0 + W k Σ Σ ( 2 L W ) ( k + 1 ) ( L + γ W ) 1 .

We see that it does not depend on σ z , which, once Theorem 2.2 is applied, can be considered null so that the average system G ¯ corresponds to the original system (17).

Therefore,

G ¯ (W)=κW+ μ τ ( a + v ¯ G γ v ¯ a v ¯ G γ v ¯ ) + a + Q 12 a Q 12 .
(35)

We show that for W already in E p , it will stay forever in E p :

  1. 1.

    Inequality W0:

Decomposing the connectivity as W=S+iA leads to X,WX=X,SX+iX,AX. By hermiticity of S and A, X,SX and X,AX are real numbers. This means we only have to show that the eigenvalues of S remain positive along the dynamics. Taking the symmetric part of equation (35) leads to

d S d t =κS+ μ ( a + a ) 2 τ v ¯ ( G γ + G γ ) v ¯ +( a + a ) Q 22 .

Suppose we take an initial condition S 0 >0. It is clear that if v ¯ (G+ G ) v ¯ and Q 22 are always positive, then S will remain positive. This would prove the result. Therefore, focus on

  • Proving v ¯ ( G γ + G γ ) v ¯ 0:

According to the first point of Lemma C.1, G γ + G γ =2 G γ G γ . Therefore, v ¯ ( G γ + G γ ) v ¯ =2 v ¯ G γ ( v ¯ G γ ) is a Gramian matrix and therefore it is positive.

  • Proving Q 22 0:

Q 22

is the covariance matrix of the random value z, therefore, it is positive-semi-definite.

  1. 2.

    Inequality W<lp:

For all x C n such that x=1, define a family of positive numbers ( α x ) whose supremum is written α and a family of functions ( g x ) such that

g x :Jx,Jx α x .

Because g is linear, d g W x (J)=x,Jx. For W g x 1 (0), i.e., x,Wx= α x , compute

d g W x ( G μ ( W ) ) = κ x , W x = α x + μ τ x , v ¯ ( a + G γ a G γ ) v ¯ x = A + ( | a + | + | a | ) x , Q 12 x = B .
  • Upper bound of A:

Cauchy-Schwarz leads to

|A|| a + | v ¯ G γ v ¯ x +| a | v ¯ G γ v ¯ x .

As before, we can use Young’s inequality for convolutions to find an upper bound of A which reads

A τ u m 2 ( | a + | + | a | ) ( l α ) 2 .
  • Upper bound of B:

According to Proposition 11.9.3 of [35] the solution of the Lyapunov equation (a) in system (34) can be rewritten

Q 11 = 0 + e t ( L W ) Σ Σ t e t ( L W ) dt,

because (WL)(WL) is not singular due to the fact W E p .

Observe that for A a positive matrix whose eigenvalues are the λ i , then the spectrum of e A is { e λ i :i=1,,n}. Therefore, e A = e min ( | λ i | ) . Therefore, if A=LW, then e A e α l . This leads to

Q 11 s 2 0 + e 2 ( α l ) t dt= s 2 [ e 2 ( α l ) t 2 ( α l ) ] 0 + = s 2 2 ( l α ) .

Then we apply the same arguments to say that

$$|B| \le \|Q_{12}\| = \gamma\left\|Q_{11}(L+\gamma-W^*)^{-1}\right\| \le \frac{s^2\gamma}{2(l-\alpha)(l+\gamma-\alpha)}.$$

The rest of the proof is identical to the Hebbian case; Assumption 3.1 is replaced by Assumption 3.2 for $E_p$ to be invariant under the flow of $\bar{G}$. □

Define

$$D_{k,q} = \frac{1}{u_m^2\,\tau\,(|a_+|+|a_-|)}\,u\,G_{l/\mu}^{k+1}\left(a_+G_\gamma - a_-G_\gamma^*\right)G_{l/\mu}^{*(q+1)}\,u^*,$$

such that $\|D_{k,q}\|_2 \le 1$.

In this framework, one can prove

Theorem B.12 The correlation term can be written

$$\frac{\mu}{\tau}\left(a_+\bar{v}G_\gamma\bar{v}^* - a_-\bar{v}G_\gamma^*\bar{v}^*\right) = \frac{u_m^2\,(|a_+|+|a_-|)}{l^2}\sum_{k,q=0}^{+\infty}\frac{W^k}{l^k}\,D_{k,q}\,\frac{W^{*q}}{l^q}.$$

Proof Similar to that of Theorem 3.3. □

Theorem B.13 If Assumption 3.2 is verified for $p \le \frac{1}{3}$, there is a unique equilibrium point which is globally asymptotically stable.

Proof Similar to the previous case. □

Now, we proceed as before by defining

$$\tilde{p} = \frac{|a_+|+|a_-|}{\kappa l^3}\left(\frac{s^2}{2\left(\frac{1}{l}+\frac{1}{\gamma}\right)} + u_m^2\right)\quad\text{and}\quad\lambda = \frac{s^2}{2u_m^2\left(\frac{1}{l}+\frac{1}{\gamma}\right)}.$$

Theorem B.14

$$\begin{aligned} W_\infty ={}& \frac{\tilde{p}\,l}{1+\lambda}\left(\lambda(\alpha_+-\alpha_-)\frac{\Sigma\Sigma^*}{d} + D_{0,0}\right)\\ &+ \frac{\tilde{p}^2 l}{(1+\lambda)^2}\Bigg(\lambda^2(\alpha_+-\alpha_-)^2\left(1+\frac{1}{1+\gamma/l}\right)\frac{(\Sigma\Sigma^*)^2}{d^2}\\ &\qquad+ \lambda\,\frac{\alpha_+-\alpha_-}{2}\left((D_{0,0}+2D_{0,1})\frac{\Sigma\Sigma^*}{d} + \frac{\Sigma\Sigma^*}{d}(D_{0,0}+2D_{1,0})\right)\\ &\qquad+ \frac{\lambda}{1+\gamma/l}\left(\alpha_+D_{0,0}\frac{\Sigma\Sigma^*}{d} - \alpha_-\frac{\Sigma\Sigma^*}{d}D_{0,0}\right) + D_{0,0}D_{1,0} + D_{0,1}D_{0,0}\Bigg) + O(\tilde{p}^3).\end{aligned}$$

Proof First, we need to work on the noise term $Q = a_+Q_{12} - a_-Q_{12}^*$. Recall that $Q_{11}$ is the solution of the Lyapunov equation $(W-L)Q_{11} + Q_{11}(W^*-L) + \Sigma\Sigma^* = 0$. Lemma D.1 says that

$$Q_{11} = \sum_{k=0}^{+\infty} W^k\,\Sigma\Sigma^*\,(2L-W^*)^{-(k+1)}$$

is a well-defined solution. We now use the fact that $(2L-W^*)^{-(k+1)} = \frac{1}{(2l)^{k+1}}\sum_{n=0}^{+\infty}\binom{n+k}{n}\frac{W^{*n}}{(2l)^n}$ to show that

$$Q_{11} = \sum_{k,n=0}^{+\infty}\frac{1}{(2l)^{k+n+1}}\binom{n+k}{n}\,W^k\,\Sigma\Sigma^*\,W^{*n}$$

and therefore

$$Q_{12} = \frac{\gamma}{2l(l+\gamma)}\sum_{k,n,q=0}^{+\infty}\frac{1}{2^{k+n}(1+\gamma/l)^q}\binom{n+k}{n}\,\frac{W^k}{l^k}\,\Sigma\Sigma^*\,\frac{W^{*(n+q)}}{l^{n+q}}.$$

Thus, writing $\alpha_\pm = \frac{a_\pm}{|a_+|+|a_-|}$ and $c_{k,n,q} = \frac{1}{2^{k+n}(1+\gamma/l)^q}\binom{n+k}{n}$, the noise term is

$$Q = \frac{d\,(|a_+|+|a_-|)}{2l^2\left(\frac{1}{l}+\frac{1}{\gamma}\right)}\sum_{k,n,q=0}^{+\infty}c_{k,n,q}\left(\alpha_+\frac{W^{n+q}}{l^{n+q}}\frac{\Sigma\Sigma^*}{d}\frac{W^{*k}}{l^k} - \alpha_-\frac{W^k}{l^k}\frac{\Sigma\Sigma^*}{d}\frac{W^{*(n+q)}}{l^{n+q}}\right).$$
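The triple series above can be probed numerically by truncation. This is our own sketch, not part of the proof; the dimensions, truncation orders and test values are arbitrary assumptions.

```python
# Truncation check (ours) of the triple series for Q12 derived above.
import numpy as np
from math import comb

rng = np.random.default_rng(2)
n, l, gamma = 3, 2.0, 1.5                     # arbitrary test values
W = 0.2 * rng.standard_normal((n, n))         # small enough for fast convergence
S = rng.standard_normal((n, n)); SS = S @ S.T
L = l * np.eye(n)

# Reference: Q11 from Lemma D.1, then Q12 = gamma * Q11 * (L + gamma - W^T)^{-1}.
M = np.linalg.inv(2 * L - W.T)
Q11, term = np.zeros((n, n)), SS @ M
for _ in range(80):
    Q11 += term
    term = W @ term @ M
Q12_ref = gamma * Q11 @ np.linalg.inv(L + gamma * np.eye(n) - W.T)

K = 25                                        # truncation of each of the three sums
Wp = [np.linalg.matrix_power(W, k) for k in range(2 * K)]
Q12 = sum(comb(m + k, m) / (2 ** (k + m) * (1 + gamma / l) ** q)
          * Wp[k] @ SS @ Wp[m + q].T / l ** (k + m + q)
          for k in range(K) for m in range(K) for q in range(K))
Q12 *= gamma / (2 * l * (l + gamma))
print(np.allclose(Q12, Q12_ref))              # True
```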

Define $\Omega = \frac{W}{\tilde{p}l}$, so that $\|\Omega\| = O(1)$. With a slight abuse of notation, we write $\bar{G}(\Omega) = \bar{G}(\tilde{p}l\,\Omega)$, such that

$$\bar{G}(\Omega) = -\tilde{p}l\kappa\,\Omega + \frac{u_m^2(|a_+|+|a_-|)}{l^2}\sum_{k,q=0}^{+\infty}(\tilde{p}\Omega)^k D_{k,q}(\tilde{p}\Omega)^{*q} + \frac{d(|a_+|+|a_-|)}{2l^2\left(\frac{1}{l}+\frac{1}{\gamma}\right)}\sum_{k,n,q=0}^{+\infty}c_{k,n,q}\left(\alpha_+(\tilde{p}\Omega)^{n+q}\frac{\Sigma\Sigma^*}{d}(\tilde{p}\Omega)^{*k} - \alpha_-(\tilde{p}\Omega)^k\frac{\Sigma\Sigma^*}{d}(\tilde{p}\Omega)^{*(n+q)}\right).$$

This leads to

$$\bar{G}(\Omega) = \left(\frac{u_m^2(|a_+|+|a_-|)}{l^2} + \frac{d(|a_+|+|a_-|)}{2l^2\left(\frac{1}{l}+\frac{1}{\gamma}\right)}\right)\times\Bigg[-\Omega + \frac{1}{1+\lambda}\underbrace{\sum_{k,q=0}^{+\infty}(\tilde{p}\Omega)^k D_{k,q}(\tilde{p}\Omega)^{*q}}_{\tilde{F}} + \frac{\lambda}{1+\lambda}\underbrace{\sum_{k,n,q=0}^{+\infty}c_{k,n,q}\left(\alpha_+(\tilde{p}\Omega)^{n+q}\frac{\Sigma\Sigma^*}{d}(\tilde{p}\Omega)^{*k} - \alpha_-(\tilde{p}\Omega)^k\frac{\Sigma\Sigma^*}{d}(\tilde{p}\Omega)^{*(n+q)}\right)}_{\tilde{Q}}\Bigg].$$

We are looking for the coefficients $F_a$ and $Q_a$ in the expansions $\tilde{F} = \sum_{a=0}^{+\infty}F_a\,\tilde{p}^a$ and $\tilde{Q} = \sum_{a=0}^{+\infty}Q_a\,\tilde{p}^a$. Recall that

$$\Omega^p = \sum_{i=0}^{+\infty}\tilde{p}^i\sum_{\substack{\eta\in\mathbb{N}^p\\ \sum_k\eta_k = i}}\Omega_{\eta_1}\Omega_{\eta_2}\cdots\Omega_{\eta_p}.$$

Therefore,

$$\tilde{Q} = \sum_{k,n,q,i,j=0}^{+\infty}c_{k,n,q}\,\tilde{p}^{\,k+n+q+i+j}\sum_{\substack{\eta\in\mathbb{N}^{n+q}\\ \sum_m\eta_m = i}}\ \sum_{\substack{\theta\in\mathbb{N}^{k}\\ \sum_m\theta_m = j}}\left(\alpha_+\Omega_{\eta_1}\cdots\Omega_{\eta_{n+q}}\frac{\Sigma\Sigma^*}{d}\,\Omega^*_{\theta_1}\cdots\Omega^*_{\theta_k} - \alpha_-\Omega_{\theta_1}\cdots\Omega_{\theta_k}\frac{\Sigma\Sigma^*}{d}\,\Omega^*_{\eta_1}\cdots\Omega^*_{\eta_{n+q}}\right).$$

Identifying the powers of $\tilde{p}$ leads to

$$Q_a = \sum_{\substack{k,n,q,i,j\ge 0\\ k+n+q+i+j = a}}c_{k,n,q}\sum_{\substack{\eta\in\mathbb{N}^{n+q}\\ \sum_m\eta_m = i}}\ \sum_{\substack{\theta\in\mathbb{N}^{k}\\ \sum_m\theta_m = j}}\left(\alpha_+\Omega_{\eta_1}\cdots\Omega_{\eta_{n+q}}\frac{\Sigma\Sigma^*}{d}\,\Omega^*_{\theta_1}\cdots\Omega^*_{\theta_k} - \alpha_-\Omega_{\theta_1}\cdots\Omega_{\theta_k}\frac{\Sigma\Sigma^*}{d}\,\Omega^*_{\eta_1}\cdots\Omega^*_{\eta_{n+q}}\right).$$

This expression is unwieldy, but it reduces to simple formulas for small $a \in \mathbb{N}$:

For $a = 0$: $Q_0 = (\alpha_+-\alpha_-)\frac{\Sigma\Sigma^*}{d}$ and $F_0 = D_{0,0}$. For $a = 1$:

$$Q_1 = \frac{\alpha_+-\alpha_-}{2}\left(\Omega_0\frac{\Sigma\Sigma^*}{d} + \frac{\Sigma\Sigma^*}{d}\Omega_0\right) + \frac{1}{1+\gamma/l}\left(\alpha_+\Omega_0\frac{\Sigma\Sigma^*}{d} - \alpha_-\frac{\Sigma\Sigma^*}{d}\Omega_0\right),\qquad F_1 = \Omega_0 D_{1,0} + D_{0,1}\Omega_0.$$

Recall that, at the equilibrium, $W_\infty = \tilde{p}l\,\Omega = \tilde{p}l\left(\frac{1}{1+\lambda}\tilde{F} + \frac{\lambda}{1+\lambda}\tilde{Q}\right)$, to get the result. □

Appendix C: Properties of the convolution operators $G_\gamma$, W, and V

Recall that $G_\gamma$, W, and V are convolution operators respectively generated by the kernels $g_\gamma$, w, and v defined in (30). Their Fourier transforms are respectively

$$\hat{g}_\gamma : \xi\mapsto\frac{\gamma}{\gamma+2i\pi\xi},\qquad \hat{v} : \xi\mapsto\frac{4l\beta}{(\beta(1+\Delta)+4i\pi\mu\xi)(\beta(1-\Delta)+4i\pi\mu\xi)},\qquad \hat{w} : \xi\mapsto\frac{l(4\beta+8i\pi\mu\xi)}{(\beta(1+\Delta)+4i\pi\mu\xi)(\beta(1-\Delta)+4i\pi\mu\xi)}.$$

C.1 Algebraic properties

Lemma C.1

$$\frac{G_\gamma+G_\gamma^*}{2} = G_\gamma G_\gamma^*.$$

Proof Compute

$$\left(G_\gamma G_\gamma^*\right)_{xy} = \gamma^2\int_{-\infty}^{+\infty}e^{-\gamma(x-z)}H(x-z)\,e^{-\gamma(y-z)}H(y-z)\,dz = \gamma^2 e^{-\gamma(x+y)}\int_{-\infty}^{\min(x,y)}e^{2\gamma z}\,dz = \gamma^2 e^{-\gamma(x+y)}\left[\frac{e^{2\gamma z}}{2\gamma}\right]_{-\infty}^{\min(x,y)} = \frac{\gamma}{2}\,e^{-\gamma(x+y-2\min(x,y))}.$$

Therefore, if $y \ge x$, then $(G_\gamma G_\gamma^*)_{xy} = \frac{\gamma}{2}e^{-\gamma(y-x)}$, and if $x \ge y$, then $(G_\gamma G_\gamma^*)_{xy} = \frac{\gamma}{2}e^{-\gamma(x-y)}$. The result follows. □
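Lemma C.1 can also be visualized with a crude discretization. The script below is our own illustration, not the authors': $G_\gamma$ is represented as a lower-triangular Toeplitz matrix on a uniform grid, and the identity holds in the bulk up to discretization and boundary errors; all grid parameters are arbitrary.

```python
# Discretized illustration (ours) of Lemma C.1: (G + G^T)/2 ~ G G^T in the bulk.
import numpy as np
from scipy.linalg import toeplitz

gamma, dt, n = 1.3, 0.005, 4000               # arbitrary grid parameters
t = dt * np.arange(n)
g = gamma * np.exp(-gamma * t) * dt           # discretized kernel g_gamma(t) H(t)
g[0] *= 0.5                                   # H(0) = 1/2 convention at the jump
G = toeplitz(g, np.zeros(n))                  # causal convolution matrix

sym, gram = 0.5 * (G + G.T), G @ G.T
bulk = slice(500, -500)                       # stay away from t = 0 and t = t_max
err = np.abs(sym - gram)[bulk, bulk].max() / (0.5 * gamma * dt)
print(err)                                    # small (~1e-2), shrinking with dt
```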

Lemma C.2

$$G_\gamma^* - G_\gamma = \frac{1}{\gamma}\,D\left(G_\gamma+G_\gamma^*\right),$$

where D is the time-differentiation operator, i.e., $(XD)(t) = \frac{dX}{dt}(t)$.

Proof $G_\gamma$ and $G_\gamma^*$ are two convolution operators respectively generated by $g_\gamma : t\mapsto\gamma e^{-\gamma t}H(t)$ and $g_\gamma^* : t\mapsto\gamma e^{\gamma t}H(-t)$. The Fourier transform of $g_\gamma$ is $\hat{g}_\gamma(\xi) = \frac{\gamma}{\gamma+2i\pi\xi}$. Therefore, the Fourier transform of $g_\gamma^* - g_\gamma$ is

$$\widehat{g_\gamma^* - g_\gamma}(\xi) = \frac{\gamma}{\gamma-2i\pi\xi} - \frac{\gamma}{\gamma+2i\pi\xi} = \frac{2i\pi\xi}{\gamma}\cdot\frac{2\gamma^2}{\gamma^2+4\pi^2\xi^2} = \frac{2i\pi\xi}{\gamma}\left(\frac{\gamma}{\gamma+2i\pi\xi}+\frac{\gamma}{\gamma-2i\pi\xi}\right) = \frac{2i\pi\xi}{\gamma}\,\widehat{\left(g_\gamma+g_\gamma^*\right)}(\xi).$$

Because $\widehat{\frac{df}{dt}}(\xi) = 2i\pi\xi\,\hat{f}(\xi)$, taking the inverse Fourier transform of $\widehat{g_\gamma^* - g_\gamma}$ gives the result. □
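The Fourier identity at the heart of this proof is a one-line symbolic computation; the sympy check below is our own addition.

```python
# Symbolic check (ours) of the identity behind Lemma C.2.
import sympy as sp

gamma = sp.symbols('gamma', positive=True)
xi = sp.symbols('xi', real=True)
g  = gamma / (gamma + 2 * sp.I * sp.pi * xi)      # Fourier transform of g_gamma
gs = gamma / (gamma - 2 * sp.I * sp.pi * xi)      # Fourier transform of g_gamma(-t)
lhs = gs - g
rhs = (2 * sp.I * sp.pi * xi / gamma) * (g + gs)
print(sp.simplify(lhs - rhs))                     # 0
```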

Lemma C.3

$$W\,V^k\,G_{\beta/\mu} = V^{k+1}.$$

Besides, if $\Delta\in i\mathbb{R}$, $V^{k+1}$ is a convolution operator generated by

$$v^{k+1} : t\mapsto\frac{\sqrt{\pi}\,(l\beta)^{k+1}}{k!\,\mu^{k+3/2}}\,e^{-\frac{\beta}{2\mu}t}\left(\frac{t}{\beta|\Delta|}\right)^{k+\frac{1}{2}}J_{k+\frac{1}{2}}\!\left(\frac{\beta|\Delta|}{2\mu}t\right)H(t),$$

where $J_n(z)$ is the Bessel function of the first kind. If $\Delta\in\mathbb{R}$, the formula above holds if one replaces $J_n(z)$ by $I_n(z)$, the modified Bessel function of the first kind, and $|\Delta|$ by Δ.

Proof We want to compute $W V^k G_{\beta/\mu}$. Compute the Fourier transform of $w * v^{*k} * g_{\beta/\mu}$, where $v^{*k}$ denotes the k-fold convolution of v with itself:

$$\widehat{w * v^{*k} * g_{\beta/\mu}}(\xi) = \hat{w}(\xi)\,\hat{g}_{\beta/\mu}(\xi)\,\hat{v}^k(\xi) = \left(\frac{l\beta}{\left(\frac{\beta(1+\Delta)}{2}+2i\pi\mu\xi\right)\left(\frac{\beta(1-\Delta)}{2}+2i\pi\mu\xi\right)}\right)^{k+1} = \hat{v}^{k+1}(\xi).$$

This proves the first result.

Then observe that

$$v^{k+1}(t) = (l\beta)^{k+1}\,\mathcal{F}^{-1}\!\left(\xi\mapsto\frac{1}{\left(\frac{\beta(1+\Delta)}{2}+2i\pi\mu\xi\right)^{k+1}}\right)*\mathcal{F}^{-1}\!\left(\xi\mapsto\frac{1}{\left(\frac{\beta(1-\Delta)}{2}+2i\pi\mu\xi\right)^{k+1}}\right)(t)$$
$$= \frac{(l\beta)^{k+1}}{\mu^{2(k+1)}}\left(s\mapsto\frac{s^k}{k!}e^{-\frac{\beta(1+\Delta)}{2\mu}s}H(s)\right)*\left(s\mapsto\frac{s^k}{k!}e^{-\frac{\beta(1-\Delta)}{2\mu}s}H(s)\right)(t)$$
$$= \frac{(l\beta)^{k+1}}{\mu^{2(k+1)}(k!)^2}\,e^{-\frac{\beta(1-\Delta)}{2\mu}t}\int_0^t s^k(t-s)^k\,e^{-\frac{\beta\Delta}{\mu}s}\,ds\;H(t).$$

The last integral can be computed analytically with the help of Bessel functions; the result depends on whether Δ is real or purely imaginary.

  • If $\Delta\in\mathbb{R}$, then introducing $I_n(z)$, the modified Bessel function of the first kind, leads to

    $$\int_0^t e^{-\frac{\beta\Delta}{\mu}s}\,s^k(t-s)^k\,ds = \sqrt{\pi}\,e^{-\frac{\beta\Delta}{2\mu}t}\,k!\left(\frac{\mu t}{\beta\Delta}\right)^{k+\frac{1}{2}}I_{k+\frac{1}{2}}\!\left(\frac{\beta\Delta}{2\mu}t\right).$$
  • If $\Delta\in i\mathbb{R}$, then introducing $J_n(z)$, the Bessel function of the first kind, leads to

    $$\int_0^t e^{-\frac{\beta\Delta}{\mu}s}\,s^k(t-s)^k\,ds = \sqrt{\pi}\,e^{-\frac{\beta\Delta}{2\mu}t}\,k!\left(\frac{\mu t}{\beta|\Delta|}\right)^{k+\frac{1}{2}}J_{k+\frac{1}{2}}\!\left(\frac{\beta|\Delta|}{2\mu}t\right).$$

This concludes the proof. □
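The Bessel-function identity used in the real case can be spot-checked numerically (here with μ = 1 for simplicity). The probe below is ours; the parameter values are arbitrary.

```python
# Numerical probe (ours) of the integral identity (real Delta, mu = 1).
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.special import iv                      # modified Bessel function I_nu

beta, Delta, k, t = 2.0, 0.5, 2, 1.7              # arbitrary test values
a = beta * Delta
lhs, _ = quad(lambda s: np.exp(-a * s) * (s * (t - s)) ** k, 0, t)
rhs = (np.sqrt(np.pi) * factorial(k) * np.exp(-a * t / 2)
       * (t / a) ** (k + 0.5) * iv(k + 0.5, a * t / 2))
print(np.isclose(lhs, rhs))                       # True
```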

C.2 Signed integral

  1. $\int_{-\infty}^{+\infty}g_\gamma(t)\,dt = \gamma\int_0^{+\infty}e^{-\gamma t}\,dt = \gamma\cdot\frac{1}{\gamma} = 1$.

  2. For $\Delta = \sqrt{1-\frac{4l}{\beta}}\in\mathbb{C}$, compute

    $$\int_{-\infty}^{+\infty}v(t)\,dt = \frac{l}{\Delta\mu}\left(\int_0^{+\infty}e^{-\frac{\beta}{2\mu}(1-\Delta)t}\,dt - \int_0^{+\infty}e^{-\frac{\beta}{2\mu}(1+\Delta)t}\,dt\right) = \frac{2l}{\Delta\beta}\cdot\frac{(1+\Delta)-(1-\Delta)}{1-\Delta^2} = \frac{4l}{\beta}\cdot\frac{\beta}{4l} = 1.$$
  3. Similarly,

    $$\int_{-\infty}^{+\infty}w(t)\,dt = \frac{l}{2\Delta\mu}\left((1+\Delta)\int_0^{+\infty}e^{-\frac{\beta}{2\mu}(1-\Delta)t}\,dt - (1-\Delta)\int_0^{+\infty}e^{-\frac{\beta}{2\mu}(1+\Delta)t}\,dt\right) = \frac{l}{\Delta\beta}\cdot\frac{(1+\Delta)^2-(1-\Delta)^2}{1-\Delta^2} = \frac{l}{\Delta\beta}\cdot 4\Delta\cdot\frac{\beta}{4l} = 1.$$

    (A numerical confirmation of these unit integrals is sketched just after this list.)
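As announced above, the unit integrals can be confirmed numerically. This is our own script, with arbitrary parameters satisfying $4l \le \beta$.

```python
# Numerical confirmation (ours) that v integrates to 1 (real-Delta regime).
import numpy as np
from scipy.integrate import quad

l, beta, mu = 0.4, 2.0, 1.0                       # arbitrary, with 4l <= beta
Delta = np.sqrt(1 - 4 * l / beta)
v = lambda t: (2 * l / (Delta * mu)) * np.exp(-beta * t / (2 * mu)) \
              * np.sinh(beta * Delta * t / (2 * mu))
print(quad(v, 0, 200)[0])                         # ~ 1.0 (tail beyond 200 is negligible)
```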

C.3 $L^1$ norm

  • For $4l \le \beta$, i.e., $\Delta = \sqrt{1-\frac{4l}{\beta}}\in\mathbb{R}^+$:

  1. $g_\gamma(t) > 0$ and $\|g_\gamma\|_1 = \int_{\mathbb{R}}g_\gamma(t)\,dt = 1$.

  2. $v(t) = \frac{2l}{\Delta\mu}e^{-\frac{\beta}{2\mu}t}\sinh\!\left(\frac{\beta\Delta}{2\mu}t\right)H(t) \ge 0$ and $\|v\|_1 = \int_{\mathbb{R}}v(t)\,dt = 1$.

  3. $w(t) = \frac{l}{\Delta\mu}e^{-\frac{\beta}{2\mu}t}\left(\sinh\!\left(\frac{\beta\Delta}{2\mu}t\right)+\Delta\cosh\!\left(\frac{\beta\Delta}{2\mu}t\right)\right)H(t) \ge 0$ and $\|w\|_1 = \int_{\mathbb{R}}w(t)\,dt = 1$.

  • For $4l > \beta$, i.e., Δ purely imaginary, we write $|\Delta| = \sqrt{\frac{4l}{\beta}-1}$ and observe that:

  1. $g_\gamma(t) > 0$ and $\|g_\gamma\|_1 = \int_{\mathbb{R}}g_\gamma(t)\,dt = 1$.

  2. $v(t) = \frac{2l}{|\Delta|\mu}e^{-\frac{\beta}{2\mu}t}\sin\!\left(\frac{\beta|\Delta|}{2\mu}t\right)H(t)$, which changes sign on $\mathbb{R}^+$. Therefore,

    $$\|v\|_1 = \frac{2l}{|\Delta|\mu}\int_0^{+\infty}e^{-\frac{\beta}{2\mu}t}\left|\sin\!\left(\frac{\beta|\Delta|}{2\mu}t\right)\right|dt = \frac{4l}{\beta|\Delta|^2}\int_0^{+\infty}e^{-\frac{s}{|\Delta|}}\,|\sin(s)|\,ds = \frac{4l}{\beta|\Delta|^2}\sum_{k=0}^{+\infty}(-1)^k\int_{k\pi}^{(k+1)\pi}e^{-\frac{s}{|\Delta|}}\sin(s)\,ds$$
    $$= \frac{4l}{\beta|\Delta|^2}\,\frac{1}{1+\frac{1}{|\Delta|^2}}\sum_{k=0}^{+\infty}\left(e^{-\frac{k\pi}{|\Delta|}}+e^{-\frac{(k+1)\pi}{|\Delta|}}\right) = \left(1+e^{-\frac{\pi}{|\Delta|}}\right)\sum_{k=0}^{+\infty}e^{-\frac{k\pi}{|\Delta|}} = \frac{1+e^{-\frac{\pi}{|\Delta|}}}{1-e^{-\frac{\pi}{|\Delta|}}} = \coth\!\left(\frac{\pi}{2|\Delta|}\right),$$

    where we used $\frac{4l}{\beta|\Delta|^2}\cdot\frac{1}{1+\frac{1}{|\Delta|^2}} = \frac{4l}{\beta(1+|\Delta|^2)} = 1$, since $1+|\Delta|^2 = \frac{4l}{\beta}$. (This value is checked numerically after this list.)
  3. $w(t) = \frac{l}{|\Delta|\mu}e^{-\frac{\beta}{2\mu}t}\left(\sin\!\left(\frac{\beta|\Delta|}{2\mu}t\right)+|\Delta|\cos\!\left(\frac{\beta|\Delta|}{2\mu}t\right)\right)H(t)$, which also changes sign on $\mathbb{R}^+$. We have not found a closed form for $\|w\|_1$.
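As announced in point 2, the value $\|v\|_1 = \coth(\frac{\pi}{2|\Delta|})$ can be verified numerically; the sketch below is ours, with arbitrary parameters in the oscillatory regime $4l > \beta$.

```python
# Numerical check (ours) of ||v||_1 = coth(pi / (2 |Delta|)) for 4l > beta.
import numpy as np
from scipy.integrate import quad

l, beta, mu = 1.0, 2.0, 1.0                       # arbitrary, with 4l > beta
aD = np.sqrt(4 * l / beta - 1)                    # |Delta|
v = lambda t: (2 * l / (aD * mu)) * np.exp(-beta * t / (2 * mu)) \
              * np.sin(beta * aD * t / (2 * mu))
norm1 = quad(lambda t: abs(v(t)), 0, 50, limit=400)[0]
print(norm1, 1 / np.tanh(np.pi / (2 * aD)))       # both ~ 1.0903
```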

Appendix D: Solution of a Lyapunov equation

Lemma D.1 The solution of the Lyapunov equation

$$(W-L)X + X(W-L)^* + D = 0,$$

where $L = l\,\mathrm{Id}$, is

$$X = \sum_{k=0}^{+\infty}W^k\,D\,(2L-W^*)^{-(k+1)}.$$
(36)

Proof First, observe that if $\{|\lambda| : \lambda \text{ eigenvalue of } W\}\subset\,]0,l[$ and $W > 0$, then $\|W(2L-W^*)^{-1}\| < 1$. Therefore, X is well defined by equation (36).

Observe that $(2L-W^*)^{-1}(L-W^*) = \mathrm{Id} - L(2L-W^*)^{-1}$. Assuming X is defined by equation (36), and using the fact that L commutes with any matrix (because it is a scalar matrix), we get

$$(L-W)X + X(L-W)^* = L\sum_{k=0}^{+\infty}W^kD(2L-W^*)^{-(k+1)} - W\sum_{k=0}^{+\infty}W^kD(2L-W^*)^{-(k+1)} + \sum_{k=0}^{+\infty}W^kD(2L-W^*)^{-k} - L\sum_{k=0}^{+\infty}W^kD(2L-W^*)^{-(k+1)}$$
$$= \sum_{k=0}^{+\infty}W^kD(2L-W^*)^{-k} - \sum_{k=0}^{+\infty}W^{k+1}D(2L-W^*)^{-(k+1)} = D,$$

which is equivalent to the claimed equation.

 □
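A quick numerical cross-check of Lemma D.1 — our own sketch, with arbitrary test matrices — compares the series (36) with a standard Lyapunov solver:

```python
# Cross-check (ours) of the series solution (36) against scipy's solver.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
n, l = 5, 3.0
W = 0.3 * rng.standard_normal((n, n))         # spectral radius well below l
L = l * np.eye(n)
D = rng.standard_normal((n, n)); D = D @ D.T  # symmetric right-hand side

X_ref = solve_continuous_lyapunov(W - L, -D)  # (W-L) X + X (W-L)^T + D = 0

M = np.linalg.inv(2 * L - W.T)
X, term = np.zeros((n, n)), D @ M             # k = 0 term of the series (36)
for _ in range(200):
    X += term
    term = W @ term @ M                       # W^{k+1} D (2L - W^T)^{-(k+2)}
print(np.allclose(X, X_ref))                  # True
```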

References

  1. Kandel E, Schwartz J, Jessell T, et al.: Principles of Neural Science. 4th edition. McGraw-Hill, New York; 2000.

  2. Khas'minskii R: The averaging principle for stochastic differential equations. Probl Pereda Inf 1968, 4(2):86–87.

  3. Pavliotis G, Stuart A: Multiscale Methods: Averaging and Homogenization. Vol. 53. Springer, Berlin; 2008.

  4. Arnold V, Levi M: Geometrical Methods in the Theory of Ordinary Differential Equations. Vol. 250. Springer, Berlin; 1988.

  5. Lorenzi L, Lunardi A, Zamboni A: Asymptotic behavior in time periodic parabolic problems with unbounded coefficients. J Differ Equ 2010, 249(12):3377–3418.

  6. Wainrib G: Double averaging principle for periodically forced slow-fast stochastic systems. Electron Commun Probab 2012. In revision.

  7. Ermentrout G, Terman D: Mathematical Foundations of Neuroscience. Vol. 35. Springer, Berlin; 2010.

  8. Izhikevich E: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.

  9. Hebb D: The Organisation of Behaviour. 1949.

  10. Dayan P, Abbott L: Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, Cambridge; 2001.

  11. Gerstner W, Kistler W: Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, Cambridge; 2002.

  12. Caporale N, Dan Y: Spike timing-dependent plasticity: a Hebbian learning rule. Annu Rev Neurosci 2008, 31:25–46.

  13. Oja E: Simplified neuron model as a principal component analyzer. J Math Biol 1982, 15(3):267–273.

  14. Miller K, MacKay D: The role of constraints in Hebbian learning. Neural Comput 1994, 6(1):100–126.

  15. Rolls E, Deco G: Computational Neuroscience of Vision. Oxford University Press, London; 2002. Chapter 8.

  16. Hopfield JJ: Hopfield network. Scholarpedia 2007, 2(5): Article ID 1977.

  17. Gerstner W, Kistler W: Mathematical formulations of Hebbian learning. Biol Cybern 2002, 87(5):404–415.

  18. Morrison A, Diesmann M, Gerstner W: Phenomenological models of synaptic plasticity based on spike timing. Biol Cybern 2008, 98(6):459–478.

  19. Burkitt A, Gilson M, van Hemmen J: Spike-timing-dependent plasticity for neurons with recurrent connections. Biol Cybern 2007, 96(5):533–546.

  20. Gilson M, Burkitt A, Grayden D, Thomas D, van Hemmen J: Emergence of network structure due to spike-timing-dependent plasticity in recurrent neuronal networks IV. Biol Cybern 2009, 101(5):427–444.

  21. Fenichel N: Geometric singular perturbation theory for ordinary differential equations. J Differ Equ 1979, 31(1):53–98.

  22. O'Malley R: Singular Perturbation Methods for Ordinary Differential Equations. Springer, New York; 1991.

  23. Kifer Y: Large Deviations and Adiabatic Transitions for Dynamical Systems and Markov Processes in Fully Coupled Averaging. Vol. 944. Am. Math. Soc., Providence; 2009.

  24. Risken H: The Fokker-Planck Equation: Methods of Solution and Applications. Vol. 18. Springer, Berlin; 1996.

  25. Galtier M, Touboul J: On an explicit representation of the solution of linear stochastic partial differential equations with delays. C R Math 2012, 350:167–172.

  26. Hawkes A: Point spectra of some mutually exciting point processes. J R Stat Soc, Ser B, Methodol 1971, 33:438–443.

  27. Földiák P: Learning invariance from transformation sequences. Neural Comput 1991, 3(2):194–200.

  28. Bi G, Poo M: Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J Neurosci 1998, 18(24):10464.

  29. Xie X, Seung H: Spike-based learning rules and stabilization of persistent neural activity. Adv Neural Inf Process Syst 2000, 12:199–205.

  30. Izhikevich E: Resonate-and-fire neurons. Neural Netw 2001, 14(6–7):883–894.

  31. Romani G, Williamson S, Kaufman L: Tonotopic organization of the human auditory cortex. Science 1982, 216(4552):1339.

  32. Abbott L, Nelson S, et al.: Synaptic plasticity: taming the beast. Nat Neurosci 2000, 3:1178–1183.

  33. Higham DJ, Mao X, Stuart AM: Strong convergence of Euler-type methods for nonlinear stochastic differential equations. SIAM J Numer Anal 2002, 40(3):1041–1063.

  34. Hale J, Lunel S: Introduction to Functional Differential Equations. Springer, Berlin; 1993.

  35. Bernstein D: Matrix Mathematics: Theory, Facts, and Formulas. Princeton University Press, Princeton; 2009.


Acknowledgements

MG thanks Olivier Faugeras for his support. MG was partially funded by the ERC advanced grant NerVi no. 227747, by the IP project BrainScaleS #269921 and by the région PACA, France. GW thanks L. Ryzhik from the Department of Mathematics, Stanford University, for his hospitality during 2010-2011, when part of this work was carried out.

Author information

Correspondence to Mathieu Galtier.


Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

GW developed the theory of temporal averaging presented in this paper. MG applied this theory to learning neural networks and did the numerical simulations. Both authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Galtier, M., Wainrib, G. Multiscale analysis of slow-fast neuronal learning models with noise. J Math Neurosci 2, 13 (2012). https://doi.org/10.1186/2190-8567-2-13
