### Abstract

This paper deals with the application of temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning. Three time-scales arise for these models: (i) the fast neuronal dynamics, (ii) the intermediate external input to the system, and (iii) the slow learning mechanisms. Based on this time-scale separation, we apply an extension of the mathematical theory of stochastic averaging with periodic forcing in order to derive a reduced deterministic model for the connectivity dynamics. We focus on a class of models where the activity is linear in order to understand the specificity of several learning rules (Hebbian, trace or anti-symmetric learning). In a weakly connected regime, we study the equilibrium connectivity which gathers the entire ‘knowledge’ of the network about the inputs. We develop an asymptotic method to approximate this equilibrium. We show that the symmetric part of the connectivity post-learning encodes the correlation structure of the inputs, whereas the anti-symmetric part corresponds to the cross-correlation between the inputs and their time derivative. Moreover, the time-scales ratio appears as an important parameter revealing temporal correlations.

##### Keywords:

slow-fast systems; stochastic differential equations; inhomogeneous Markov process; averaging; model reduction; recurrent networks; unsupervised learning; Hebbian learning; STDP

### 1 Introduction

Complex systems are made of a large number of interacting elements leading to non-trivial behaviors. They arise in various areas of research such as biology, social sciences, physics or communication networks. In particular in neuroscience, the nervous system is composed of billions of interconnected neurons interacting with their environment. Two specific features of this class of complex systems are that (i) external inputs and (ii) internal sources of random fluctuations influence their dynamics. Their theoretical understanding is a great challenge and involves high-dimensional non-linear mathematical models integrating non-autonomous and stochastic perturbations.

Modeling these systems gives rise to many different scales both in space and in time.
In particular, learning processes in the brain involve three time-scales: from neuronal
activity (fast), external stimulation (intermediate) to synaptic plasticity (slow).
Here, the fast time-scale corresponds to a few milliseconds and the slow time-scale to minutes or hours, while the intermediate time-scale generally lies between the two, although some stimuli may be faster than the neuronal activity time-scale (*e.g.*, sub-millisecond auditory signals [1]). The separation of these time-scales is an important and useful property in their study. Indeed, multiscale methods appear particularly relevant for handling and simplifying such complex systems.

First, the stochastic averaging principle [2,3] is a powerful tool for analyzing the impact of noise on slow-fast dynamical systems. This method relies on approximating the fast dynamics by its quasi-stationary measure and averaging the slow evolution with respect to this measure. In the asymptotic regime of perfect time-scale separation, this leads to a slow reduced system whose analysis enables a better understanding of the original stochastic model.

Second, periodic averaging theory [4], which was originally developed for celestial mechanics, is particularly relevant to study the effect of fast deterministic, periodic perturbations (external input) on dynamical systems. This method also leads to a reduced model where the external perturbation is time-averaged.
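As a toy illustration of this principle (our own example, not one of the systems studied in this paper), consider the scalar equation x' = -x + sin²(t/δ): as δ → 0 the fast forcing averages to its mean 1/2, and the trajectory approaches that of the averaged equation x' = -x + 1/2. A minimal numerical sketch:

```python
import numpy as np

def simulate_forced(delta, T=10.0, dt=1e-4):
    """Forward-Euler integration of x' = -x + sin(t/delta)^2."""
    x = 0.0
    for k in range(int(T / dt)):
        t = k * dt
        x += dt * (-x + np.sin(t / delta) ** 2)
    return x

def simulate_averaged(T=10.0, dt=1e-4):
    """Averaged equation x' = -x + 1/2 (the time-average of sin^2 is 1/2)."""
    x = 0.0
    for _ in range(int(T / dt)):
        x += dt * (-x + 0.5)
    return x
```

For δ = 0.01 the two trajectories at T = 10 differ by a quantity of order δ, consistent with the averaging principle.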

It seems appropriate to gather these two methods to address our case of a noisy and
input-driven slow-fast dynamical system. This combined approach provides a novel way
to understand the interactions between the three time-scales relevant in our models.
More precisely, we will consider the following class of multiscale stochastic differential
equations (SDEs), with

where
*t*. Random perturbations are included in the form of a diffusion term, and

We are interested in the double limit
**w** in the asymptotic regime where both the variable **v** and the external input are much faster than **w**. This asymptotic regime corresponds to the study of a neuronal network in which both
the external input **u** and the neuronal activity **v** operate on a faster time-scale than the slow plasticity-driven evolution of synaptic
weights **W**. To account for the possible difference of time-scales between **v** and the input, we introduce the time-scale ratio

Recently, in an important contribution [5], a precise understanding of the long-time behavior of such processes has been obtained
using methods from partial differential equations. In particular, conditions ensuring
the existence of a periodic family of probability measures to which the law of **v** converges as time grows have been identified, together with a sharp estimation of
the speed of mixing. These results are at the heart of the extension of the classical
stochastic averaging principle [2] to the case of periodically forced slow-fast SDEs [6]. As a result, we obtain a reduced equation describing the slow evolution of variable
**w** in the form of an ordinary differential equation,

where
*G* with respect to a specific probability measure, as explained in Section 2.

This paper first introduces the appropriate mathematical framework and then focuses on applying these multiscale methods to learning neural networks.

The individual elements of these networks are neurons or populations of neurons.
A common assumption at the basis of mathematical neuroscience [7] is to model their behavior by a stochastic differential equation which is made of
four different contributions: (i) an intrinsic dynamics term, (ii) a communication
term, (iii) a term for the external input, and (iv) a stochastic term for the intrinsic
variability. Assuming that their activity is represented by the fast variable
*F* corresponds to the first three terms contributing to the dynamics). In the literature,
the level of non-linearity of the function *F* ranges from a linear (or almost-linear) system to spiking neuron dynamics [8], yet the structure of the system is universal.

These neurons are interconnected through a connectivity matrix which represents the
strength of the synapses connecting the real neurons together. The slow modification
of the connectivity between the neurons is commonly thought to be the essence of learning.
Unsupervised learning rules update the connectivity exclusively based on the value
of the activity variable. Therefore, this mechanism is represented by the slow equation
above, where
*G* is the learning rule. Probably the most famous of these rules is the Hebbian learning
rule introduced in [9]. It says that if both neurons A and B are active at the same time, then the synapses
from A to B and B to A should be strengthened proportionally to the product of the
activity of A and B. There are many different variations of this correlation-based
principle which can be found in [10,11]. Another recent, unsupervised, biologically motivated learning rule is the spike-timing-dependent
plasticity (STDP) reviewed in [12]. It is similar to Hebbian learning except that it focuses on causation instead of
correlation and that it occurs on a faster time-scale. Both of these types of rule
correspond to *G* being quadratic in **v**.

The literature on dynamic learning networks is extensive, yet we take a significantly different approach to the problem. A historical focus was the understanding of feedforward deterministic networks [13-15]. Another approach consisted in precomputing the connectivity of a recurrent network according to the principles underlying the Hebbian rule [16]. Actually, most current research in the field focuses on STDP and is based on the precise times of the spikes, making them explicit in computations [17-20]. Our approach differs from the others on at least one of the following points: (i) we consider recurrent networks, (ii) we study the evolution of the coupled system activity/connectivity, and (iii) we consider bounded dynamical systems for the activity without requiring them to spike. Besides, our approach is a rigorous mathematical analysis in a field where most results rely heavily on heuristic arguments and numerical simulations. To our knowledge, this is the first time such models, expressed in a slow-fast SDE formalism, are analyzed using temporal averaging principles.

The purpose of this application is to understand what the network learns from its exposure to time-dependent inputs. In other words, we are interested in the evolution of the connectivity variable, which evolves on a slow time-scale, under the influence of the external input and of the noise added on the fast variable. More precisely, we intend to compute explicitly the equilibrium connectivities of such systems. This final matrix corresponds to the knowledge the network has extracted from the inputs. Although the derivation of the results is mathematically involved, we have tried to extract widely understandable conclusions from our mathematical results, and we believe this paper brings novel elements to the debate about the role and mechanisms of learning in large-scale networks.

Although the averaging method is a generic principle, we have made significant assumptions to keep the analysis of the averaged system mathematically tractable. In particular, we will assume that the activity evolves according to a linear stochastic differential equation. This is not very realistic for modeling individual neurons, but it is more reasonable for modeling populations of neurons; see Chapter 11 of [7].

The paper is organized as follows. Section 2 is devoted to introducing the temporal averaging theory. Theorem 2.2 is the main result of this section; it provides the technical tool to tackle learning neural networks. Section 3 applies the mathematical tools developed in the previous section to models of learning neural networks. A generic model is described, and three particular models of increasing complexity are analyzed: first Hebbian learning, then trace learning, and finally STDP learning, all for linear activities. Finally, Section 4 is a discussion of the consequences of the previous results from the viewpoint of their biological interpretation.

### 2 Averaging principles: theory

In this section, we present multiscale theoretical results concerning stochastic averaging of periodically forced SDEs (Section 2.3). These results combine ideas from singular perturbations, classical periodic averaging and stochastic averaging principles. Therefore, we recall briefly, in Sections 2.1 and 2.2, several basic features of these principles, providing several examples that are closely related to the application developed in Section 3.

#### 2.1 Periodic averaging principle

We present here an example of a slow-fast ordinary differential equation perturbed by a fast external periodic input. We have chosen this example since it readily illustrates many ideas that will be developed in the following sections. In particular, this example shows how the ratio between the time-scale separation of the system and the time-scale of the input appears as a new crucial parameter.

*Example 2.1* Consider the following linear time-inhomogeneous dynamical system with

This system is particularly handy since one can solve analytically the first ordinary differential equation, that is,

where we have introduced the *time-scales ratio*

In this system, one can distinguish various asymptotic regimes when
*μ*:

• Regime 1: Slow input

First, if
*geometric singular perturbation theory*[21,22] one can approximate the slow variable

Now taking the limit
*averaging principle*[4] for periodically driven differential equations, one can approximate

since

• Regime 2: Fast input

If

so that

and when

• Regime 3: Time-scales matching

Now consider the intermediate case where

since

Thus, we have seen in this example that

1. the two limits

2. the ratio *μ* between the internal time-scale separation

#### 2.2 Stochastic averaging principle

Time-scales separation is a key property to investigate the dynamical behavior of
non-linear multiscale systems, with techniques ranging from averaging principles to
geometric singular perturbation theory. This property also appears crucial to understanding the impact of noise. Instead of carrying out a small-noise analysis, a multiscale approach based on the *stochastic averaging principle* [2] can be a powerful tool to unravel subtle interplays between noise properties and
non-linearities. More precisely, consider a system of SDEs in

with initial conditions
*F*, *G*, **Σ** smooth functions ensuring the existence and uniqueness for the solution
*p*-dimensional standard Brownian motion, defined on a filtered probability space
*ϵ*, which denotes in this section a single positive real number.

In order to approximate the behavior of
*ϵ*, the idea is to average out the equation for the slow variable with respect to the
stationary distribution of the fast one. More precisely, one first assumes that for
each
*frozen* fast SDE,

admits a unique invariant measure, denoted

and **w** the solution of

**Theorem 2.1***For any*
*and*

As a consequence, analyzing the behavior of the deterministic solution **w** can help to understand useful features of the stochastic process

*Example 2.2* In this example we consider a similar system as in Example 2.1, but with a noise
term instead of the periodic perturbation. Namely, we consider

with
*w* of

where

that is,

Consequently,

Applying (3) leads to the following result: for any

Interestingly, the asymptotic behavior of
*ϵ* is characterized by a deterministic trajectory that depends on the strength *σ* of the noise applied to the system. Thus, the stochastic averaging principle appears
particularly interesting when unraveling the impact of noise strength on slow-fast
systems.
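The mechanism can be checked numerically on a toy system of our own choosing (the equations below are illustrative, not those of Example 2.2): a fast Ornstein-Uhlenbeck process v, whose stationary law is N(0, σ²/2), drives a slow variable w through its square, and the slow trajectory approaches the solution of the averaged ODE dw/dt = σ²/2 − w:

```python
import numpy as np

def simulate_slow_fast(eps, sigma=1.0, T=5.0, seed=0):
    """Euler-Maruyama scheme for the toy slow-fast SDE
        dv = -(v/eps) dt + (sigma/sqrt(eps)) dB,   dw/dt = v^2 - w.
    The fast OU process v has stationary law N(0, sigma^2/2)."""
    rng = np.random.default_rng(seed)
    dt = eps / 50.0                      # resolve the fast time-scale
    sqdt = np.sqrt(dt)
    v, w = 0.0, 0.0
    for _ in range(int(T / dt)):
        v += -dt * v / eps + (sigma / np.sqrt(eps)) * sqdt * rng.standard_normal()
        w += dt * (v * v - w)
    return w

def averaged_w(T=5.0, sigma=1.0):
    """Solution of the averaged ODE dw/dt = sigma^2/2 - w, with w(0) = 0."""
    return 0.5 * sigma ** 2 * (1.0 - np.exp(-T))
```

For eps = 1e-3, the simulated slow variable at time T lands close to the deterministic averaged solution, with fluctuations that shrink as eps decreases.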

Many other results have been developed since, extending the set-up to the case where
the slow variable has a diffusion component or to infinite-dimensional settings for
instance, and also refining the convergence study, providing *homogenization* results concerning the limit of

#### 2.3 Double averaging principle

Combining ideas of periodic and stochastic averaging introduced previously, we present
here theoretical results concerning multiscale SDEs driven by an external time-periodic
input. Consider

with
*τ*-periodic function and

We further denote

**Definition 2.1** We define the asymptotic time-scale ratio

Accordingly, we denote

The following assumption is made to ensure existence and uniqueness of a strong solution
to system (4). In the following,

**Assumption 2.1** Existence and uniqueness of a strong solution

(i) The functions *F*, *G*, and **Σ** are locally Lipschitz continuous in the space variable **z**. More precisely, for any

(ii) There exists a constant

To control the asymptotic behavior of the fast variable, one further assumes the following.

**Assumption 2.2** Asymptotic behavior of the fast process:

(i) The diffusion matrix **Σ** is bounded

and uniformly non-degenerate

(ii) There exists

According to the value of

• For any fixed

converges exponentially fast to a unique invariant probability measure denoted by

• For any fixed

converges exponentially fast towards
*cf.* the Appendix Theorem A.1).

• For any fixed

where

According to the value of *μ*, we introduce a vector field

**Definition 2.2** We define

*Notation* We may denote the periodic system of measures
*F* and **Σ**. Accordingly, we may denote

We are now able to present our main mathematical result. Extending Theorem 2.1, the
following theorem describes the asymptotic behavior of the slow variable

**Theorem 2.2***Let*
*If***w***is the solution of*

*then the following convergence result holds*, *for all*
*and*

*Remark 2.1*

1. The extremal cases

• Slow input: If one considers the case where the limit
*first*, so that from Theorem 2.1 with fast variable
*t* (with the trivial equation

Then taking the limit

where

• Fast input: If the limit

2. To study the extremal cases
*μ*. Moreover, in terms of applications, this parameter has a relatively straightforward interpretation as the ratio between the time-scale of intrinsic neuronal activity and the typical stimulus time-scale in a given situation. Although the zeroth-order limit (*i.e.*, the averaged system) seems to depend only on the position of *α* with respect to 1, it seems reasonable to expect that the fluctuations around the
limit would depend on the precise value of *α*. This is a difficult question which may deserve further analysis.

The case

3. By a rescaling of the frozen process (6), one deduces the following *scaling* relationships:

and

Therefore, if one knows, in the case
*F* and a diffusion coefficient *σ*, denoted

4. It seems reasonable to expect that the above result is still valid when considering
ergodic, but not necessarily periodic, time dependency of the function

#### 2.3.1 Case of a fast linear SDE with periodic input

We present here an elementary case where one can compute explicitly the quasi-stationary
time-periodic family of measures

with initial condition
*τ*-periodic function.

We are interested in the large time behavior of the law of
*τ*-periodic family of probability measures

where
*i.e.*,

and **Q** is the unique solution of the Lyapunov equation

Indeed, if one denotes

whose stationary distribution is known to be a centered Gaussian measure with the
covariance matrix **Q** solution of (9); see Chapter 3.2 of [24]. Notice that if **A** is self-adjoint with respect to
*i.e.*,

Hence, in the linear case, the averaged vector field of equation (7) becomes

where

Therefore, due to the linearity of the fast SDE, the periodic system of measures *ν* is just a constant Gaussian distribution shifted by a periodic function of time
*G* is quadratic in **v**, this remark implies that one can perform independently the integral over time and
over
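As a practical note, the stationary covariance **Q** of a stable linear SDE d**v** = **Av** dt + **S** d**B** solves the standard Lyapunov equation **AQ** + **QA**ᵀ + **SS**ᵀ = 0, and can be computed numerically; a minimal sketch with SciPy (the matrices **A** and **S** below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Stationary covariance Q of the stable linear SDE dv = A v dt + S dB
# solves the Lyapunov equation  A Q + Q A^T + S S^T = 0.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])   # illustrative stable drift (eigenvalues -2, -3)
S = np.eye(2)                 # illustrative diffusion matrix

# solve_continuous_lyapunov(a, q) solves a x + x a^T = q,
# hence the minus sign on the right-hand side.
Q = solve_continuous_lyapunov(A, -S @ S.T)
residual = A @ Q + Q @ A.T + S @ S.T   # should vanish
```

The solution is symmetric positive definite whenever **A** is stable and **SS**ᵀ is non-degenerate, matching the Gaussian stationary measure described above.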

*Example 2.3* In this last example, we consider a combination between Example 2.1 and Example 2.2,
namely we consider the following system of periodically forced SDEs:

As in Example 2.1 and as shown above, the behavior of this system when both
*μ* defined in (5). More precisely, we have the following three regimes:

• Regime 1: slow input:

• Regime 2: fast input:

• Regime 3: time-scale matching:

#### 2.4 Truncation and asymptotic well-posedness

In some cases, Assumptions 2.1-2.2 may not be satisfied on the entire phase space

Let us start with an example, namely the following bi-dimensional system with white noise input:

with

For the fast drift
*w* may reach *l* due to the fluctuations captured in the term
*κ* is not large enough. Such a system may have exponentially growing trajectories. However,
we claim that for small enough *ϵ*,
*w* for a very long time, and if this limit remains below

**Definition 2.3** A stochastic differential equation with a given initial condition is asymptotically
well posed in probability if for the given initial condition,

1. a unique solution exists until a random time

2. for all

We give in the following proposition sufficient conditions for system (4) to be asymptotically well posed in probability and to satisfy conclusions of Theorem 2.2.

Let us introduce the following set of additional assumptions.

**Assumption 2.3** Moment conditions:

(i) There exists

(ii) For any
*K* of

*Remark 2.2* This last set of assumptions will be satisfied in all the applications of Section 3
since we consider linear models with additive noise for the equation of **v**, implying this variable to be Gaussian and the function *G* only involves quadratic moments of **v**; therefore, the moment conditions (i) and (ii) will be satisfied without any difficulty.
Moreover, if one considers non-linear models for the variable **v**, then the Gaussian property may be lost; however, adding sigmoidal non-linearity
has, in general, the effect of bounding the dynamics, thus making these moment assumptions
reasonable to check in most models of interest.

**Property 2.3***If there exists a subset* ℰ *of*
*such that*

1. *The functions**F*, *G*, **Σ***satisfy Assumptions *2.1-2.3 *restricted on*

2. ℰ *is invariant under the flow of*
*as defined in* (7).

*Then for any initial condition*
*system* (4) *is asymptotically well posed in probability and*
*satisfies the conclusion of Theorem *2.2.

*Proof* See Appendix A.2. □

Here, we show that it applies to system (11). First, with

It remains to check that the solution of this system satisfies

that is, the subset

This property is satisfied as soon as

Indeed, one can show that

and that

### 3 Averaging learning neural networks

In this section, we apply the temporal averaging methods derived in Section 2 to models of unsupervised learning neural networks. First, we design a generic learning model and show that one can formally define an averaged system with equation (7). However, going beyond the mere definition of the averaged system seems very difficult, and we only manage to obtain explicit results for simple systems where the fast activity dynamics is linear. In the three last subsections, we carry out the analysis for three examples of increasing complexity.

In the following, we always consider that the initial connectivity is 0. This choice is arbitrary but has no consequences, because we focus on the regime where there is a single globally stable equilibrium point (see Section 3.2.3).

#### 3.1 A generic learning neural network

We now introduce a large class of stochastic neuronal networks with learning models.
They are defined as coupled systems describing the simultaneous evolution of the activity
of
*activity field* of the network, and
*connectivity matrix*.

Each neuron variable

where the function
*i* and
*i*. The stochastic term
*n*-dimensional Brownian motion, **Σ** is an
**v** or other variables, and
*i*th component of the vector

The input
*i* has mainly two components: the external input
*a priori* a complex combination of post-synaptic potentials coming from many other neurons.
The coefficient
*i.e.*,

where
*t* (to take convolutions into account). In practical cases, they are often taken to
be sigmoidal functions. We abusively redefine
*stochastic learning model* as the following system of SDEs.

**Definition 3.1**

Before applying the general theory of Section 2, let us make several comments about this generic model of neural network with learning. This model is a non-autonomous, stochastic, non-linear slow-fast system.

In order to apply Theorem 2.2, one needs Assumptions 2.1, 2.2, and 2.3 to be satisfied,
restricting the space of possible functions
**Σ**, and *G*. In particular, Assumption 2.2(ii) seems rather restrictive since it excludes systems
with multiple equilibria and suggests that the general theory is more suited to deal
with rate-based networks. However, one should keep in mind that these assumptions
are only sufficient, and that the double averaging principle may work as well in systems
which do not satisfy readily those assumptions.

As we will show in Section 3.3, a particular form of history-dependence can be taken
into account, to a certain extent. Indeed, for instance, if the function ℱ is actually
a functional of the past trajectory of variable

The noise term can be purely additive or set by a particular function

In the following subsection, we apply the averaging theory to various combinations
of neuronal network models, embodied by choices of functions
**Σ**, and various learning rules, embodied by a choice of the function *G*. We will also analyze the obtained averaged system, describing the slow dynamics
of the connectivity matrix in the limit of perfect time-scale separation and, in particular,
study the convergence of this averaged system to an equilibrium point.

#### 3.2 Symmetric Hebbian learning

One of the simplest, yet non-trivial, stochastic learning models is obtained when considering

• A linear model for neuronal activity, namely
*l* a positive constant.

• A linear model for the synaptic transmission, namely

• A constant diffusion matrix **Σ** (additive noise) proportional to the identity

• A Hebbian learning rule with linear decay, namely

This model can be written as follows:

where neurons are assumed to have the same decay constant:
**u** is a periodic continuous input (it replaces
*n*-dimensional Brownian noise.
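A system of this slow-fast form can be simulated directly with the Euler-Maruyama scheme. The sketch below is our own illustration, not the paper's simulation code: the two-neuron size, the fast sinusoidal input, the input time-scaling, and every parameter value are illustrative assumptions.

```python
import numpy as np

def simulate_hebbian(eps=1e-3, l=2.0, kappa=1.0, sigma=0.1, T=2.0, seed=0):
    """Euler-Maruyama sketch of a linear network with slow Hebbian learning:
        dv = (1/eps) (-l v + W v + u(t/eps)) dt + (sigma/sqrt(eps)) dB,
        dW/dt = v v^T - kappa W,   with W(0) = 0.
    All parameter values and the input u are illustrative."""
    rng = np.random.default_rng(seed)
    n = 2
    dt = eps / 20.0          # resolve the fast neuronal time-scale
    sqdt = np.sqrt(dt)
    v = np.zeros(n)
    W = np.zeros((n, n))     # initial connectivity is 0
    for k in range(int(T / dt)):
        s = k * dt / eps                       # fast time
        u = np.array([np.sin(s), np.cos(s)])   # fast periodic input
        v = v + (dt / eps) * (-l * v + W @ v + u) \
              + (sigma / np.sqrt(eps)) * sqdt * rng.standard_normal(n)
        W = W + dt * (np.outer(v, v) - kappa * W)
    return W
```

Because the Hebbian term v vᵀ is symmetric and W(0) = 0, the learned connectivity stays exactly symmetric, and with a leak l large enough relative to the input and noise the weights remain in the weakly connected regime, in line with Assumption 3.1.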

The first question that arises is about the well-posedness of the system: What is
the definition interval of the solutions of system (12)? Do they explode in finite
time? At first sight, it seems there may be a runaway of the solution if the largest
real part among the eigenvalues of **W** grows bigger than *l*. In fact, it turns out this scenario can be avoided if the following assumption linking
the parameters of the system is satisfied.

**Assumption 3.1** There exists

where

It corresponds to making sure the external (*i.e.*,
*i.e.*, *σ*) excitations are not too large compared to the decay mechanism (represented by *κ* and *l*). Note that if
*d* are fixed, it is sufficient to increase *κ* or *l* for this assumption to be satisfied.

Under this assumption, the space

is invariant by the flow of the averaged system
**W** is semi-definite positive and

**Theorem 3.1***If Assumption *3.1 *is verified for*
*then system* (12) *is asymptotically well posed in probability and the connectivity matrix*
*the solution of system* (12), *converges to***W***in the sense that for all*

*where***W***is the deterministic solution of*

*where*
*is the*
*periodic attractor of*
*where*
*is supposed to be fixed*.

*Proof* See Theorem B.1 in Appendix B.2. □

In the following, we focus on the averaged system described by (13). Its right-hand side is made of three terms: a linear and homogeneous decay, a correlation term, and a noise term. The last two terms are made explicit in the following.

#### 3.2.1 Noise term

As seen in Section 2, in the linear case, the noise term **Q** is the unique solution of the Lyapunov equation (9) with

In more complicated cases, the computation of this term appears to be much more difficult as we will see in Section 3.4.

#### 3.2.2 Correlation term

This term corresponds to the auto-correlation of neuronal activity. It is only implicitly
defined; thus, this section is devoted to finding an explicit form depending only
on the parameters *l*, *μ*, *τ*, the connectivity **W**, and the inputs **u**. Actually, one can perform an expansion of this term with respect to a small parameter
corresponding to a *weakly connected expansion*. Most terms vanish if the connectivity **W** is small compared to the strength of the intrinsic decaying dynamics of neurons *l*.

The auto-correlation term of a

With this notation, it is simple to think of **v** as a ‘semi-continuous matrix’ of

It is well known, see [17] for instance, that this term gathers information about the correlations of the inputs. Indeed, if we assume that the input is sufficiently slow, then

Actually, without the assumption of a slow input, lagged correlations of the input
appear in the averaged system. Before giving the expression of these temporal correlations,
we need to introduce some notations. First, define the convolution filter
*H* is the Heaviside function. This family of functions is displayed for different values
of

We also define the filtered correlation of the inputs

where
*k*th convolution of

which motivates the definition of the

**Fig. 1.** This shows the
*i.e.*, the functions
*k* ranging from 0 to 6. For

Observe that

We intend to express the correlation term as an infinite converging sum involving
these filtered correlations. In this perspective, we use a result we have proved in
[25] to write the solution of a general class of non-autonomous linear systems (*e.g.*,

**Lemma 3.2***If*
*is the solution*, *with zero as initial condition*, *of*
*it can be written by the sum below which converges if***W***is in*
*for*

*where*

*Proof* See Lemma B.2 in Appendix B.2. □

This is a decomposition of the solution of a linear differential system on a basis of operators in which the spatial and temporal parts are decoupled. This important step in the detailed study of the averaged equation cannot be achieved easily in models with non-linear activity. Everything is now set up to introduce the explicit expansion of the correlation term used in what follows. Indeed, we use the previous result to rewrite the correlation term as follows.

**Property 3.3***The correlation term can be written*

*Proof* See Theorem B.3 in Appendix B.2. □

This infinite sum of convolved filters is reminiscent of a property of Hawkes processes that have a linear input-output gain [26].

The speed of inputs characterized by *μ* only appears in the temporal profiles
*i.e.*,

Therefore, the averaged equation can be explicitly rewritten

In Figure 2, we illustrate this result by comparing, for different
*i.e.*, we choose

**Fig. 2.** The first two figures, (**a**) and (**b**), represent the evolution of the connectivity for original stochastic system (12),
superimposed with averaged system (13), for two different values of *ϵ*: respectively
*ϵ*, the better the approximation. This can be seen in the picture (**c**) where we have plotted the precision on the *y*-axis and *ϵ* on the *x*-axis. The parameters used here are

#### 3.2.3 Global stability of the equilibrium point

Now that we have found an explicit formulation for the averaged system, it is natural
to study its dynamics. Actually, we prove in the following that if the connectivity
**W** is kept smaller than
*i.e.*, Assumption 3.1 is verified with
*F* is a contraction operator on

**Theorem 3.4***If Assumption *3.1 *is verified for*
*then there is a unique equilibrium point in the invariant subset*
*which is globally*, *asymptotically stable*.

*Proof* See Theorem B.4 in Appendix B.2. □

The fact that the equilibrium point is unique means that the ‘knowledge’ of the network about its environment (corresponding by hypothesis to the connectivity) is eventually unique. For a given input and any initial condition, the network can only converge to the same ‘knowledge’ or ‘understanding’ of this input.

#### 3.2.4 Explicit expansion of the equilibrium point

When the network is weakly connected, the high-order terms in expansion (15) may be neglected. In this section, we follow this idea and find an explicit expansion for the equilibrium connectivity where the strength of the connectivity is the small parameter enabling the expansion. The weaker the connectivity, the more terms can be neglected in the expansion.

In fact, it is not natural to speak about a weakly connected learning network since
the connectivity is a variable. However, we are able to identify a *weak connectivity index* which controls the strength of the connectivity. We say the connectivity is weak
when it is negligible compared to the intrinsic leak term, *i.e.*,

In the asymptotic regime

With these, we can prove that the equilibrium connectivity

**Theorem 3.5**

*Proof* See Theorem B.5 in Appendix B.2. □

At the first order, the final connectivity is

**Fig. 3.** (**a**) shows the temporal evolution of the input to a
**b**) shows the correlation matrix of the inputs. The off-diagonal terms are null because
the two patterns are spatially orthogonal. (**c**), (**d**), and (**e**) represent the first order of Theorem 3.5 expansion for different *μ*. Actually, this approximation is quite good since the percentage of error between
the averaged system and the first order, computed by
^{−4}% for the three figures. These figures make it possible to observe the role of *μ*. If *μ* is small, *i.e.*, the inputs are slow, then the transient can be neglected and the learned connectivity
is roughly the correlation of the inputs; see (a). If *μ* increases, *i.e.*, the inputs are faster, then the connectivity starts to encode a link between the
two patterns that were flashed circularly and elicited responses that did not fade
away when the other pattern appeared. The temporal structure of the inputs is also
learned when *μ* is large. The parameters used in this figure are

Not only is the spatial correlation encoded in the weights; there is also some
information about the temporal correlation, *i.e.*, two successive but orthogonal events occurring in the inputs will be wired into the
connectivity although they do not appear in the spatial correlations; see Figure 3 for an example.
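As a minimal numerical illustration of this point (a sketch, not taken from the paper; the patterns and time discretization are assumptions), the equal-time correlation of two spatially orthogonal patterns flashed circularly is diagonal, whereas a lagged correlation picks up the temporal link between them:

```python
import numpy as np

# Two spatially orthogonal patterns flashed alternately (hypothetical input).
T, lag = 200, 1
u1 = np.array([1.0, 0.0])
u2 = np.array([0.0, 1.0])
u = np.array([u1 if t % 2 == 0 else u2 for t in range(T)])

C0 = u.T @ u / T                       # equal-time correlation: diagonal
C1 = u[:-lag].T @ u[lag:] / (T - lag)  # lagged correlation: off-diagonal

print(C0)  # off-diagonal terms are zero (spatial orthogonality)
print(C1)  # off-diagonal terms are non-zero (temporal link)
```

This is the mechanism by which, for large *μ*, the learned connectivity wires together patterns that never co-occur but follow each other in time.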

#### 3.3 Trace learning: band-pass filter effect

In this section, we study an improvement of the learning model obtained by adding a certain form of history dependence to the system, and explain how it changes the results of the previous section. Given that Theorem 2.2 only applies to instantaneous processes, we will only be able to treat history-dependent systems which can be reformulated as instantaneous processes. This class of systems actually contains models which are biologically more relevant than the previous model and which exhibit interesting additional functional behaviors. In particular, it covers the following features:

• Trace learning.

It is likely that a biological learning rule integrates the activity over a short time. As Földiák suggested in [27], it makes sense to consider the learning equation as being

where ∗ is the convolution and

• Damped oscillatory neurons.

Many neurons have an oscillatory behavior. Although we cannot take this into account in a linear model, we can model a neuron by a damped oscillator, which also introduces a new important time-scale in the system. Adding adaptation to neuronal dynamics is an elementary way to implement this idea. This corresponds to modeling a single neuron without inputs by the equivalent formulations

• Dynamic synapses.

The electro-chemical process of synaptic communication is very complicated and non-linear.
Yet, one of the features of synaptic communication we can take into account in a linear
model is the shape of the post-synaptic potentials. In this section, we consider that
each synapse is a linear filter whose finite impulse response (*i.e.*, the post-synaptic potential) has the shape

For mathematical tractability, we assume in the following that
*i.e.*, the time-scales of the neurons, of the synapses, and of the learning
windows are the same. There is, in fact, a large variety of temporal scales for neurons,
synapses, and learning windows, which makes this assumption not unreasonable. Besides, in
many brain areas, examples of these time constants lie in the same range (≃10 ms).
Yet, investigating the impact of relaxing this assumption would be necessary to model
biological networks more faithfully. This leads to the following system:

where the notations are the same as in Section 3.2. The behavior of a single neuron
will be oscillatory damped if
*i.e.*,

To comply with the hypotheses of Theorem 2.2 (*i.e.*, no dependence on the history of the process), we can add a variable **z** to the system which takes care of integrating the variable **v** over an exponential window. This leads to the equivalent system (in the limit

This trick makes it possible to deal with some history-based processes where the dependence on the past is exponential.

It turns out that most of the results of Section 3.2 remain true for system (16), as detailed
in the following. The existence of the solution on

where

where
*v* is computed in Appendix C.3. It leads to

Therefore, the averaged system can be rewritten

As before, the existence and uniqueness of a globally attractive equilibrium point
is guaranteed if Assumption 3.1 is verified for

Besides, the weakly connected expansion of the equilibrium point performed in Section 3.2.4 can also be derived in this case (see Theorem B.10). At the first order, this leads to the equilibrium connectivity

The second order is given in Theorem B.10. The main difference with the Hebbian linear case is the shape of the temporal filters. Actually, the temporal filters have an oscillatory damped behavior if Δ is purely imaginary. These two cases are compared in Figure 4.
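The band-pass effect of such oscillatory damped filters can be checked numerically. The sketch below (the filter form and all parameters are assumptions for illustration) takes a damped oscillatory filter h(t) = e^{−t/τ} sin(ω₀t) and verifies that its gain peaks near the intrinsic frequency ω₀:

```python
import numpy as np

# A damped oscillatory filter acts as a band-pass filter: its gain
# |H(w)| peaks near the intrinsic frequency w0 (assumed parameters).
tau, w0, dt = 1.0, 10.0, 1e-3
t = np.arange(0.0, 20.0, dt)
h = np.exp(-t / tau) * np.sin(w0 * t)

freqs = np.fft.rfftfreq(len(t), d=dt) * 2 * np.pi  # angular frequencies
gain = np.abs(np.fft.rfft(h)) * dt                 # approximate |H(w)|
w_peak = freqs[np.argmax(gain)]
print(w_peak)  # peaks near w0
```

With a real Δ the filter is a monotone low-pass kernel instead, and no such preferred frequency appears.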

**Fig. 4.** These represent the temporal filter
(**a**) When
(**b**) When the dynamics of the neurons and synapses are oscillatory damped, some oscillations
appear in the temporal filters. The number of oscillations depends on Δ. If Δ is real,
then there are no oscillations, as in the previous case. However, when Δ becomes a
pure imaginary number, it creates a few oscillations, which are even more numerous
if

These oscillatory damped filters have the effect of amplifying a particular frequency
of the input signal. As shown in Figure 5, if Δ is a pure imaginary number, then

**Fig. 5.** This is the spectral profile
*l* increases beyond

#### 3.4 Asymmetric ‘STDP’ learning with correlated noise

Here, we extend the results to temporally asymmetric learning rules and spatially correlated noise. We consider a learning rule similar to spike-timing-dependent plasticity (STDP), which is closer to biological experiments than the previous Hebbian rules. It has been observed that the strength of the connection between two neurons depends mainly on the difference between the times of the spikes emitted by each neuron, as shown in Figure 6; see [12].

**Fig. 6.** This figure represents the synapse strength modification when the post-synaptic neuron
emits a spike. The *y*-axis corresponds to an additive or multiplicative update of the connectivity. For
instance, in [28], this is
*x*-axis is the time at which a pre-synaptic spike reaches the synapse, relative to
the time of the post-synaptic spike, chosen to be 0.

Assuming that the decay times of the positive and negative parts of Figure 6 are equal, we approximate this function by
*i* is spiking at time *t*, and then it counts the number of previous spikes from the pre-synaptic neuron *j* that might have caused the post-synaptic spike. This computation is weighted by an exponentially
decaying function. This accounts for the left part of Figure 6. The last term
*j* is spiking and counts the number of previous spikes from the post-synaptic neuron
*i* that are unlikely to have been caused by the pre-synaptic neuron. This computation
is weighted by the mirrored exponentially decaying function. It
accounts for the right part of Figure 6. This leads to the coupled system

where the non-linear intrinsic dynamics of the neurons is represented by *f*. Indeed, the term
*i*. Therefore, it records the value of the pre-synaptic membrane potential weighted
by the function

Actually, this formulation is valid for any non-linear activity with correlated noise.
However, studying the role of STDP in spiking networks is beyond the scope of this
paper, since we can only obtain explicit results for models with linear activity.
Therefore, we assume that the activity is linear while keeping the learning rule
as derived in the spiking case, *i.e.*, we assume

We also use the trick of adding auxiliary variables to remove the history dependence. This reads

In this framework, the method presented in Section 3.2 applies with small changes. First, the well-posedness assumption becomes

**Assumption 3.2** There exists

where

Under this assumption, the system is asymptotically well posed in probability (Theorem B.11), and we show that the averaged system is

where we have used Theorem B.12 to expand the correlation term. The noise term **Q** is equal to

According to Theorem B.13, the system is also globally asymptotically convergent to a single equilibrium, which we study in the following.

We perform a weakly connected expansion of the equilibrium connectivity of system (18). As shown in Theorem B.14, the first order of the expansion is

Writing
where **S** is symmetric and **A** is skew-symmetric, leads to

According to Lemma C.1, the symmetric part is very similar to the trace learning case in Section 3.3. Applying Lemma C.2 leads to

Therefore, the STDP learning rule simply adds an antisymmetric part to the final
connectivity, keeping the symmetric part as in the Hebbian case. Besides, the antisymmetric
part corresponds to computing the cross-correlation of the inputs with their derivative.
For high-order terms, this remains true, although the temporal profiles differ from
those of the first order. These results are in line with previous works underlining the
similarity between STDP learning and differential Hebbian learning, where

Figure 7 shows an example of purely antisymmetric STDP learning, *i.e.*,

**Fig. 7.** Antisymmetric STDP learning for a network of
(**a**) Temporal evolution of the inputs to the network. The three neurons are successively
and periodically excited. The *red color* corresponds to an excitation of 1 and the *blue* to no excitation. (**b**) Equilibrium connectivity. The matrix is antisymmetric and shows that neurons excite
one of their neighbors and are inhibited by the other. (**c**) Temporal evolution of the connectivity strength. The colors correspond to those
of (b). The connectivity of system (17) corresponds to the *plain thin oscillatory curves*. The connectivity of the averaged system (18) (with
*plain thick lines*. Note that each curve corresponds to the superposition of three connections which
remain equal through learning. The *dashed curves* correspond to the antisymmetric part in (19). The parameters chosen for this simulation
were

### 4 Discussion

We have applied temporal averaging methods on slow/fast systems modeling the learning mechanisms occurring in linear stochastic neural networks. When we make sure the connectivity remains small, the dynamics of the averaged system appears to be simple: the connectivity always converges to a unique equilibrium point. Then, we performed a weakly connected expansion of this final connectivity whose terms are combinations of the noise covariance and the lagged correlations of the inputs: the first-order term is simply the sum of the noise covariance and the correlation of the inputs.

• As opposed to the former input/output view of neurons, we have considered the
membrane potential **v** to be the solution of a dynamical system. The consequence of this modeling choice
is that not only the spatial correlations but also the temporal correlations are
learned. Because we take the transients into account, the activity never converges;
it lives between the representations of the inputs. Therefore, it links concepts
together.

The parameter *μ* is the ratio of the time-scales of the inputs and of the activity variable. If
*μ* grows, the dynamics becomes increasingly transient: it has no time to converge.
Therefore, if the inputs are extremely slow, the network only learns the spatial correlation
of the inputs. If the inputs are fast, it also learns the temporal correlations. This
is illustrated in Figure 3.

This suggests that learning associations between concepts, for instance, learning
words in two different languages, may be optimized by presenting the two words to be associated
circularly at a certain frequency. Indeed, increasing the frequency (with a fixed
duration of exposure to each word) amounts to increasing *μ*. Therefore, the network learns the temporal correlations of the inputs better and
thus strengthens the link between the two concepts.

• According to the model of the resonator neuron [30], Section 3.3 suggests that neurons and synapses with a preferred oscillation frequency will preferentially extract the correlation of the inputs filtered by a band-pass filter centered on the intrinsic frequency of the neurons.

Actually, it has been observed that the auditory cortex is tonotopically organized,
*i.e.*, the neurons are arranged by frequency [31]. It is traditionally thought that this is achieved through a particular connectivity
between the neurons. We exhibit here another mechanism to select this frequency, based
solely on the parameters of the neurons: a network of many different
neurons whose intrinsic frequencies are uniformly spread is likely to perform a Fourier-like
operation, decomposing the signal by frequency.

In particular, this emphasizes the fact that the network does not treat space and time similarly. Roughly speaking, associating several pictures and associating several sounds are therefore two different tasks which involve different mechanisms.

• In this paper, the original hierarchy of the network has been neglected: the network
is made of neurons which receive external inputs. A natural way to include a hierarchical
structure (with layers, for instance) without changing the setup of the paper is
therefore to remove the external input to some neurons. However, according to Theorem 3.5
(and its extensions, Theorems B.10 and B.14), one can see that these neurons will be
disconnected from the others at the first order (if the noise is spatially uncorrelated).
Linear activities thus imply that the high-level neurons disconnect from the others, which
is a problem. However, one can observe that the second-order term in Theorem 3.5 is
not null if the noise matrix **Σ** is not diagonal. Indeed, it is the noise shared between neurons which recruits the
high-level neurons and builds connections from and to them.

It is likely that a significant part of noise in the brain is locally induced, *e.g.*, local perturbations due to blood vessels or local chemical signals. In a way, the
neurons close to each other share their noise and it seems reasonable to choose the
matrix **Σ** so that it reflects the biological proximity between neurons. In fact, **Σ** specifies the original structure of the network and makes it possible for close-by
neurons to recruit each other.

Another idea to address hierarchy in networks would be to replace the synaptic decay
term

• It is also interesting to observe that most of the noise contribution to the equilibrium
connectivity for STDP learning (see Theorem B.14) vanishes if the learning is purely
skew-symmetric, *i.e.*,
*i.e.*, the Hebbian mechanism, that writes the noise into the connectivity.

• We have shown that there is a natural analogous STDP learning for spiking neurons in our case of linear neurons. This asymmetric rule converges to a final connectivity which can be decomposed into symmetric and skew-symmetric parts. The first one is similar to the symmetric Hebbian learning case, emphasizing that the STDP is nothing more than an asymmetric Hebbian-like learning rule. The skew-symmetric part of the final connectivity is the cross-correlation between the inputs and their derivatives.
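A toy numerical sketch of this decomposition (the circular input sequence and the discretization are assumptions, echoing the three-neuron example of Figure 7): the cross-correlation of the input with its time derivative is skew-symmetric and encodes the cyclic temporal order:

```python
import numpy as np

# Hypothetical circular input: neurons 0, 1, 2 are excited in turn.
T = 300
u = np.zeros((T, 3))
for t in range(T):
    u[t, t % 3] = 1.0

du = np.gradient(u, axis=0)      # time derivative of the input
J = u.T @ du / T                 # cross-correlation of input and derivative
S = 0.5 * (J + J.T)              # symmetric (Hebbian-like) part
A = 0.5 * (J - J.T)              # skew-symmetric (STDP-specific) part

# A[i, j] > 0 when neuron j follows neuron i in the cycle:
print(np.round(A, 3))
```

Each neuron thus excites its successor and is inhibited by its predecessor, which is exactly the antisymmetric equilibrium connectivity of Figure 7(b).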

This has an interesting interpretation when looking at the spontaneous activity of
the network post-learning. In fact, if we assume that the inputs are generated by
an autonomous system

In a way, the noise terms generate random patterns which tend to be forgotten by
the network due to the leak term
*i.e.*,

Numerous challenges remain to be pursued in this direction.

First, it seems natural to look for an application of these mathematical methods to more realistic models. The two main limitations of the class of models we study in Section 3 are that (i) the activity variable is governed by a linear equation and (ii) all the neurons are assumed to be identical. The mathematical analysis in this paper was made possible by the assumption that the neural network has linear dynamics, which does not reflect the intrinsic non-linear behavior of neurons. However, the cornerstone of the application of temporal averaging methods to a learning neural network, namely Property 3.3, is similar to the behavior of Poisson processes [26], which have useful applications for learning neural networks [19,20]. This suggests that the dynamics studied in this paper may be quite similar to that of some non-linear network models. A more rigorous study of the extension of the present theory to non-linear and heterogeneous models is the next step toward better models of biologically plausible neural networks.

Second, we have shown that the equilibrium connectivity is made of a symmetric and
an antisymmetric term. In terms of statistical analysis of data sets, the symmetric part
corresponds to classical correlation matrices. The antisymmetric part, however, suggests
a way to improve the purely correlation-based approach used in many statistical analyses
(*e.g.*, PCA) toward a causality-oriented framework which might be better suited to dealing
with dynamical data.
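To make this causality-oriented idea concrete, here is a small synthetic sketch (the signals and lag are made up for illustration): when a signal x leads a signal y, the antisymmetric part of the lagged correlation is non-zero and its sign gives the direction of the lead, something a symmetric correlation matrix cannot reveal:

```python
import numpy as np

# Synthetic example: x leads y by `lag` steps.
rng = np.random.default_rng(0)
T, lag = 5000, 5
x = rng.standard_normal(T + lag)
y = np.roll(x, lag)       # y[t] = x[t - lag] (with wrap-around)
x, y = x[lag:], y[lag:]   # drop the wrapped-around samples

def lagged_corr(a, b, d):
    """Empirical estimate of E[a(t) * b(t + d)] for d > 0."""
    return float(np.mean(a[:-d] * b[d:]))

C01 = lagged_corr(x, y, lag)   # x now vs. y later: large
C10 = lagged_corr(y, x, lag)   # y now vs. x later: near zero
A01 = 0.5 * (C01 - C10)        # antisymmetric part: signed lead indicator
print(C01, C10, A01)
```

A positive A01 flags x as leading y; symmetrizing the two lagged correlations, as PCA-style analyses implicitly do, would discard exactly this directional information.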

### Appendix A: Stochastic and periodic averaging

#### A.1 Long-time behavior of inhomogeneous Markov processes

In order to construct the averaged vector field

Under regularity and dissipativity conditions, [5] proves the following general result about the asymptotic behavior of the solution of

where
*τ*-periodic.

The first point of the following theorem gives the definition of an *evolution system of measures*, which generalizes the notion of an invariant measure to inhomogeneous
Markov processes. The exponential estimate in point 2 of the theorem is a key
step in proving the averaging principle of Theorem 2.2.

**Theorem A.1** ([5])

1. *There exists a unique τ-periodic family of probability measures*
*such that, for all continuous and bounded functions ϕ*,

*Such a family is called an evolution system of measures*.

2. *Furthermore, under a stronger dissipativity condition, the convergence of the law of X to μ is exponentially fast. More precisely, for any*
*there exist*
*and*
*such that for all ϕ in the space of p-integrable functions with respect to*

#### A.2 Proof of Property 2.3

**Property A.2** *If there exists a smooth subset* ℰ *of*
*such that*

1. *The functions* *F*, *G*, **Σ** *satisfy Assumptions* 2.1-2.3 *restricted to*

2. ℰ *is invariant under the flow of*
*as defined in* (7).

*Then for any initial condition*
*system* (4) *is asymptotically well posed in probability and*
*satisfies the conclusion of Theorem* 2.2.

*Proof* The idea of the proof is to truncate the original system, replacing *G* by a smooth truncation which coincides with *G* on ℰ and which is close to 0 outside ℰ. More precisely, for

Then, we introduce

with the same initial condition as

Let

We also introduce the following stopping times:

Finally, we define

Let us remark at this point that in order to prove that
*T* inside ℰ are not problematic. Therefore, we introduce

Our first claim is that on finite time intervals
*β* sufficiently small. To prove our claim, we proceed in two steps, first working inside

1. For any
*C* (which may depend on

We conclude by an application of the Markov inequality, implying