
Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons

Abstract

We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons’ initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis.

Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80.

1 Introduction

Cortical activity displays highly complex behaviors which are often characterized by the presence of noise. Reliable responses to specific stimuli often arise at the level of population assemblies (cortical areas or cortical columns) featuring a very large number of neuronal cells, each of these presenting a highly nonlinear behavior, that are interconnected in a very intricate fashion. Understanding the global behavior of large-scale neural assemblies has been a great endeavor in the past decades. One of the main interests of large-scale modeling is characterizing brain functions, which is what most imaging techniques record. Moreover, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing on the order of 100 to 100,000 neurons belonging to a few different types. These columns have specific functions; for example, in the human visual area V1, they respond to preferential orientations of bar-shaped visual stimuli. In this case, information processing does not occur at the scale of individual neurons but rather corresponds to an activity integrating the individual dynamics of many interacting neurons and resulting in a mesoscopic signal that arises through averaging effects and effectively depends on a few control parameters. This vision, inherited from statistical physics, requires that the space scale be large enough to include sufficiently many neurons and small enough so that the region considered is homogeneous. This is, in effect, the case of the cortical columns.

In the field of mathematics, studying the limit of systems of interacting particles has been a long-standing problem that presents many technical difficulties. One of the questions addressed was to characterize the limit of the probability distribution of an infinite set of interacting diffusion processes, and the fluctuations around this limit for a finite number of processes. The first breakthroughs on this question are due to Henry McKean (see, e.g. [1, 2]). It was then investigated in various contexts by a large number of authors such as Braun and Hepp [3], Dawson [4] and Dobrushin [5], and most of the theory was achieved by Tanaka and collaborators [6–9] and of course Sznitman [10–12]. Assuming that all particles (in our case, neurons) start from independent, identically distributed initial conditions, these authors proved, using tools from stochastic analysis (the Wasserstein distance, large deviation techniques), that in the limit where the number of particles tends to infinity, any finite number of particles behave independently of the other ones, and they all have the same probability distribution, which satisfies a nonlinear Markov equation. Finite-size fluctuations around the limit are derived in a general case in [10]. Most of these models use a standard hypothesis of global Lipschitz continuity and a linear growth condition on the drift and diffusion coefficients of the diffusions, as well as the Lipschitz continuity of the interaction function. Extensions to discontinuous càdlàg processes including singular interactions (through a local time process) were developed in [11]. Problems involving singular interaction variables (e.g. nonsmooth functions) are also widely studied in the field, but are not relevant in our case.

In the present article, we apply this mathematical approach to the problem of interacting neurons arising in neuroscience. To this end, we extend the theory to encompass a wider class of models. This implies the use of locally (instead of globally) Lipschitz coefficients and of a Lyapunov-like growth condition replacing the customary linear growth assumption for some of the functions appearing in the equations. The contributions of this article are fourfold:

  1.

    We derive, in a rigorous manner, the mean-field equations resulting from the interaction of infinitely many neurons in the case of widely accepted models of spiking neurons and synapses.

  2.

    We prove a propagation of chaos property which shows that in the mean-field limit, the neurons become independent, in agreement with some recent experimental work [13] and with the idea that the brain processes information in a somewhat optimal way.

  3.

    We show, numerically, that the mean-field limit is a good approximation of the mean activity of the network even for fairly small sizes of neuronal populations.

  4.

    We suggest, numerically, that the changes in the dynamics of the mean-field limit when varying parameters can be understood by studying the mean-field Fokker-Planck equation.

We start by reviewing such models in the ‘Spiking conductance-based models’ section to motivate the present study. It is in the ‘Mean-field equations for conductance-based models’ section that we provide the limit equations describing the behaviors of an infinite number of interacting neurons and state and prove the existence and uniqueness of solutions in the case of conductance-based models. The detailed proof of the second main theorem, that of the convergence of the network equations to the mean-field limit, is given in the Appendix. In the ‘Numerical simulations’ section, we begin to address the difficult problem of the numerical simulation of the mean-field equations and show some results indicating that they may be an efficient way of representing the mean activity of a finite-size network as well as to study the changes in the dynamics when varying biological parameters. The final ‘Discussion and conclusion’ section focuses on the conclusions of our mathematical and numerical results and raises some important questions for future work.

2 Spiking conductance-based models

This section sets the stage for our results. We review in the ‘Hodgkin-Huxley model’ section the Hodgkin-Huxley model equations in the case where both the membrane potential and the ion channel equations include noise. We then proceed in the ‘The FitzHugh-Nagumo model’ section with the FitzHugh-Nagumo equations in the case where the membrane potential equation includes noise. We next discuss in the ‘Models of synapses and maximum conductances’ section the connectivity models of networks of such neurons, starting with the synapses, electrical and chemical, and finishing with several stochastic models of the synaptic weights. In the ‘Putting everything together’ section, we write the network equations in the various cases considered in the previous section and express them in a general abstract mathematical form that is the one used for stating and proving the results about the mean-field limits in the ‘Mean-field equations for conductance-based models’ section. Before we jump into this, we conclude in the ‘Mean-field methods in computational neuroscience: a quick overview’ section with a brief overview of the mean-field methods popular in computational neuroscience.

From the mathematical point of view, each neuron is a complex system, whose dynamics is often described by a set of stochastic nonlinear differential equations. Such models aim at reproducing the biophysics of ion channels governing the membrane potential and therefore the spike emission. This is the case of the classical model of Hodgkin and Huxley [14] and of its reductions [15–17]. Simpler models use discontinuous processes mimicking the spike emission by modeling the membrane voltage and considering that spikes are emitted when it reaches a given threshold. These are called integrate-and-fire models [18, 19] and will not be addressed here. The models of large networks we deal with here therefore consist of systems of coupled nonlinear diffusion processes.

2.1 Hodgkin-Huxley model

One of the most important models in computational neuroscience is the Hodgkin-Huxley model. Using pioneering experimental techniques of that time, Hodgkin and Huxley [14] determined that the activity of the giant squid axon is controlled by three major currents: a voltage-gated persistent $K^+$ current with four activation gates, a voltage-gated transient $Na^+$ current with three activation gates and one inactivation gate, and an Ohmic leak current, $I_L$, which is carried mostly by chloride ions ($Cl^-$). In this paper, we only use the space-clamped Hodgkin-Huxley model which we slightly generalize to a stochastic setting in order to better take into account the variability of the parameters. The advantages of this model are numerous, and one of the most prominent aspects in its favor is its correspondence with the most widely accepted formalism to describe the dynamics of the nerve cell membrane. A very extensive literature can also be found about the mathematical properties of this system, and it is now quite well understood.

The basic electrical relation between the membrane potential and the currents is simply:

$$C\,\frac{dV}{dt} = I^{\mathrm{ext}}(t) - I_K - I_{Na} - I_L,$$

where $I^{\mathrm{ext}}(t)$ is an external current. The detailed expressions for $I_K$, $I_{Na}$ and $I_L$ can be found in several textbooks, e.g. [17, 20]:

$$I_K = \bar{g}_K\, n^4\,(V - E_K), \qquad I_{Na} = \bar{g}_{Na}\, m^3 h\,(V - E_{Na}), \qquad I_L = g_L\,(V - E_L),$$

where $\bar{g}_K$ (respectively, $\bar{g}_{Na}$) is the maximum conductance of the potassium (respectively, the sodium) channel; $g_L$ is the conductance of the Ohmic channel; and $n$ (respectively, $m$) is the activation variable for $K^+$ (respectively, for $Na^+$). There are four (respectively, three) activation gates for the $K^+$ (respectively, the $Na^+$) current, which accounts for the power 4 (respectively, 3) in the expression of $I_K$ (respectively, $I_{Na}$). $h$ is the inactivation variable for $Na^+$. These activation/deactivation variables, denoted by $x \in \{n, m, h\}$ in what follows, represent a proportion (they vary between 0 and 1) of open gates. The proportions of open channels are given by the functions $n^4$ and $m^3 h$. The proportions of open gates can be computed through a Markov chain model, assuming the gates open with rate $\rho_x(V)$ (the dependence on $V$ accounts for the voltage-gating of the gate) and close with rate $\zeta_x(V)$. These processes can be shown to converge, under standard assumptions, towards the following ordinary differential equations:

$$\dot{x} = \rho_x(V)\,(1 - x) - \zeta_x(V)\,x, \quad x \in \{n, m, h\}.$$

The functions $\rho_x(V)$ and $\zeta_x(V)$ are smooth functions whose exact values can be found in several textbooks such as the ones cited above. Note that half of these six functions are unbounded when the voltage goes to −∞, being of the form $k_1 e^{-k_2 V}$, with $k_1$ and $k_2$ two positive constants. Since these functions have been fitted to experimental data corresponding to values of the membrane potential between roughly −100 and 100 mV, it is clear that extremely large in magnitude, negative values of this variable do not have any physiological meaning. We can therefore safely and smoothly perturb these functions so that they are upper-bounded by some large (but finite) positive number for these values of the membrane potential. Hence, the functions $\rho_x$ and $\zeta_x$ are bounded and Lipschitz continuous for $x \in \{n, m, h\}$. A more precise model taking into account the finite number of channels through the Langevin approximation results in the stochastic differential equation

$$dx_t = \left(\rho_x(V)(1 - x) - \zeta_x(V)\,x\right) dt + \sqrt{\rho_x(V)(1 - x) + \zeta_x(V)\,x}\;\chi(x)\, dW_t^x,$$

where $W_t^x$, $x \in \{n, m, h\}$, are independent standard Brownian motions. $\chi(x)$ is a function that vanishes outside $(0,1)$. This guarantees that the solution remains a proportion, i.e. lies between 0 and 1 for all times. We define

$$\sigma_x(V, x) = \sqrt{\rho_x(V)(1 - x) + \zeta_x(V)\,x}\;\chi(x).$$
(1)

In order to complete our stochastic Hodgkin-Huxley neuron model, we assume that the external current $I^{\mathrm{ext}}(t)$ is the sum of a deterministic part, noted $I(t)$, and a stochastic part, a white noise with variance $\sigma_{\mathrm{ext}}$ built from a standard Brownian motion $W_t$ independent of the $W_t^x$, $x \in \{n, m, h\}$. Considering the current produced by the influx of ions through these channels, we end up with the following system of stochastic differential equations:

$$\begin{cases} C\, dV_t = \left(I(t) - \bar{g}_K\, n^4\,(V - E_K) - \bar{g}_{Na}\, m^3 h\,(V - E_{Na}) - g_L\,(V - E_L)\right) dt + \sigma_{\mathrm{ext}}\, dW_t, \\ dx_t = \left(\rho_x(V)(1 - x) - \zeta_x(V)\,x\right) dt + \sigma_x(V, x)\, dW_t^x, \quad x \in \{n, m, h\}. \end{cases}$$
(2)

This is a stochastic version of the Hodgkin-Huxley model. The functions $\rho_x$ and $\zeta_x$ are bounded and Lipschitz continuous (see the discussion above). The functions $n$, $m$ and $h$ are bounded between 0 and 1; hence, the functions $n^4$ and $m^3 h$ are Lipschitz continuous.

To illustrate the model, we show in Figure 1 the time evolution of the three ion channel variables n, m and h as well as that of the membrane potential V for a constant input I=20.0. The system of ordinary differential equations (ODEs) has been solved using a Runge-Kutta scheme of order 4 with an integration time step Δt=0.01. In Figure 2, we show the same time evolution when noise is added to the channel variables and the membrane potential.

Fig. 1 Solution of the noiseless Hodgkin-Huxley model. Left: time evolution of the three ion channel variables n, m and h. Right: corresponding time evolution of the membrane potential. Parameters are given in the text.

Fig. 2 Noisy Hodgkin-Huxley model. Left: time evolution of the three ion channel variables n, m and h. Right: corresponding time evolution of the membrane potential. Parameters are given in the text.

For the membrane potential, we have used $\sigma_{\mathrm{ext}} = 3.0$ (see Equation 2), while for the noise in the ion channels, we have used the following $\chi$ function (see Equation 1):

$$\chi(x) = \begin{cases} \Gamma\, e^{-\Lambda / (1 - (2x - 1)^2)} & \text{if } 0 < x < 1, \\ 0 & \text{if } x \le 0 \text{ or } x \ge 1, \end{cases}$$
(3)

with Γ=0.1 and Λ=0.5 for all the ion channels. The system of SDEs has been integrated using the Euler-Maruyama scheme with Δt=0.01.
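To make the integration procedure concrete, the following is a minimal Euler-Maruyama sketch of the stochastic system (Equation 2) with the χ function of Equation 3. The input I = 20.0, the noise levels σ_ext = 3.0, Γ = 0.1, Λ = 0.5 and the time step Δt = 0.01 are the values quoted above; the maximum conductances, reversal potentials, rate functions ρ_x, ζ_x, initial conditions and simulation horizon are standard textbook values and are assumptions on our part, since they are not listed in this section.

```python
import numpy as np

# Assumed classical Hodgkin-Huxley parameters and rate functions (not listed in this
# section); they follow common textbook conventions, e.g. [17, 20].
C, gK, gNa, gL = 1.0, 36.0, 120.0, 0.3
EK, ENa, EL = -77.0, 50.0, -54.4
I, sigma_ext = 20.0, 3.0              # input and voltage noise from the text
Gamma, Lam, dt, T = 0.1, 0.5, 0.01, 100.0

rho = {'n': lambda V: 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0)),
       'm': lambda V: 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0)),
       'h': lambda V: 0.07 * np.exp(-(V + 65.0) / 20.0)}
zeta = {'n': lambda V: 0.125 * np.exp(-(V + 65.0) / 80.0),
        'm': lambda V: 4.0 * np.exp(-(V + 65.0) / 18.0),
        'h': lambda V: 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))}

def chi(x):
    # Equation 3: smooth bump, zero outside (0, 1)
    return Gamma * np.exp(-Lam / (1.0 - (2.0 * x - 1.0) ** 2)) if 0.0 < x < 1.0 else 0.0

rng = np.random.default_rng(0)
V, gate = -65.0, {'n': 0.32, 'm': 0.05, 'h': 0.6}   # assumed initial conditions
for _ in range(int(T / dt)):
    # voltage step (first line of Equation 2)
    I_ion = (gK * gate['n'] ** 4 * (V - EK)
             + gNa * gate['m'] ** 3 * gate['h'] * (V - ENa)
             + gL * (V - EL))
    V += dt * (I - I_ion) / C + (sigma_ext / C) * np.sqrt(dt) * rng.standard_normal()
    # gating variable steps (second line of Equation 2)
    for x in ('n', 'm', 'h'):
        r, z = rho[x](V), zeta[x](V)
        drift = r * (1.0 - gate[x]) - z * gate[x]
        diffusion = np.sqrt(max(r * (1.0 - gate[x]) + z * gate[x], 0.0)) * chi(gate[x])
        gate[x] += dt * drift + diffusion * np.sqrt(dt) * rng.standard_normal()
```

Setting σ_ext = 0 and Γ = 0 in this sketch gives an explicit Euler discretization of the noiseless system shown in Figure 1.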

Because the Hodgkin-Huxley model is rather complicated and high-dimensional, many reductions have been proposed, in particular to two dimensions instead of four. These reduced models include the famous FitzHugh-Nagumo and Morris-Lecar models. These two models are two-dimensional approximations of the original Hodgkin-Huxley model based on quantitative observations of the time scale of the dynamics of each variable and identification of variables. Most reduced models still comply with the Lipschitz and linear growth conditions ensuring the existence and uniqueness of a solution, except for the FitzHugh-Nagumo model which we now introduce.

2.2 The FitzHugh-Nagumo model

In order to reduce the dimension of the Hodgkin-Huxley model, FitzHugh [15, 16, 21] introduced a simplified two-dimensional model. The motivation was to isolate conceptually essential mathematical features yielding excitation and transmission properties from the analysis of the biophysics of sodium and potassium flows. Nagumo and collaborators [22] followed up with an electrical system reproducing the dynamics of this model and studied its properties. The model consists of two equations, one governing a voltage-like variable V having a cubic nonlinearity and a slower recovery variable w. It can be written as:

$$\begin{cases} \dot{V} = f(V) - w + I^{\mathrm{ext}}, \\ \dot{w} = c\,(V + a - b\, w), \end{cases}$$
(4)

where $f(V)$ is a cubic polynomial in $V$ which we choose, without loss of generality, to be $f(V) = V - V^3/3$. The parameter $I^{\mathrm{ext}}$ models the input current the neuron receives; the parameters $a$, $b > 0$ and $c > 0$ describe the kinetics of the recovery variable $w$. As in the case of the Hodgkin-Huxley model, the current $I^{\mathrm{ext}}$ is assumed to be the sum of a deterministic part, noted $I$, and a stochastic white noise accounting for the randomness of the environment. The stochastic FitzHugh-Nagumo equation is deduced from Equation 4 and reads:

$$\begin{cases} dV_t = \left(V_t - \dfrac{V_t^3}{3} - w_t + I\right) dt + \sigma_{\mathrm{ext}}\, dW_t, \\ dw_t = c\,(V_t + a - b\, w_t)\, dt. \end{cases}$$
(5)

Note that because the function $f(V)$ is not globally Lipschitz continuous (only locally), the well-posedness of the stochastic differential equation (Equation 5) does not follow immediately from the standard theorem which assumes the global Lipschitz continuity of the drift and diffusion coefficients. This question is settled below by Proposition 1.

We show in Figure 3 the time evolution of the adaptation variable and the membrane potential in the case where the input $I$ is constant and equal to 0.7. The left-hand side of the figure shows the case with no noise while the right-hand side shows the case where noise of intensity $\sigma_{\mathrm{ext}} = 0.25$ (see Equation 5) has been added.

Fig. 3 Time evolution of the membrane potential and the adaptation variable in the FitzHugh-Nagumo model. Left: without noise. Right: with noise. See text.

The deterministic model has been solved with a Runge-Kutta method of order 4, while the stochastic model has been integrated with the Euler-Maruyama scheme. In both cases, we have used an integration time step Δt=0.01.
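For completeness, here is a minimal Euler-Maruyama sketch of Equation 5. The values I = 0.7, σ_ext = 0.25 and Δt = 0.01 are those quoted above, while a, b, c and the initial condition are illustrative placeholders, since their numerical values are not given in this section.

```python
import numpy as np

# Values from the text: I = 0.7, sigma_ext = 0.25, dt = 0.01.
# a, b, c and the initial condition are illustrative placeholders.
a, b, c = 0.7, 0.8, 0.08
I, sigma_ext = 0.7, 0.25
dt, T = 0.01, 200.0
rng = np.random.default_rng(1)

V, w = 0.0, 0.0
trace = []
for _ in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal()
    # both updates use the state at the beginning of the step
    V, w = (V + dt * (V - V ** 3 / 3.0 - w + I) + sigma_ext * dW,
            w + dt * c * (V + a - b * w))
    trace.append(V)
```

Setting σ_ext = 0 gives an explicit Euler version of the deterministic trajectory shown on the left of Figure 3.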

2.3 Partial conclusion

We have reviewed two main models of space-clamped single neurons: the Hodgkin-Huxley and FitzHugh-Nagumo models. These models are stochastic, including various sources of noise: external and internal. The noise sources are supposed to be independent Brownian processes. We have shown that the resulting stochastic differential Equations 2 and 5 are well-posed. As pointed out above, this analysis extends to a large number of reduced versions of the Hodgkin-Huxley model such as those that can be found in the book [17].

2.4 Models of synapses and maximum conductances

We now study the situation in which several of these neurons are connected to one another, forming a network which we will assume to be fully connected. Let $N$ be the total number of neurons. These neurons belong to $P$ populations, e.g. pyramidal cells or interneurons. If the index of a neuron is $i$, $1 \le i \le N$, we note $p(i) = \alpha$, $1 \le \alpha \le P$, as the population it belongs to. We note $N_{p(i)}$ as the number of neurons in population $p(i)$. Since we want to be as close to biology as possible while keeping the possibility of a mathematical analysis of the resulting model, we consider two types of simplified, but realistic, synapses: chemical and electrical (gap junctions). The following material concerning synapses is standard and can be found in textbooks [20]. The new, and we think important, twist is to add noise to our models. To unify notations, in what follows, $i$ is the index of a postsynaptic neuron belonging to population $\alpha = p(i)$, and $j$ is the index of a presynaptic neuron to neuron $i$ belonging to population $\gamma = p(j)$.

2.4.1 Chemical synapses

The principle of functioning of chemical synapses is based on the release of a neurotransmitter in the presynaptic neuron's synaptic bouton, which binds to specific receptors on the postsynaptic cell. This process, similar to the currents described in the Hodgkin and Huxley model, is governed by the value of the cell membrane potential. We use the model described in [20, 23], which features a quite realistic biophysical representation of the processes at work in the spike transmission and is consistent with the previous formalism used to describe the conductances of other ion channels. The model emulates the fact that following the arrival of an action potential at the presynaptic terminal, a neurotransmitter is released in the synaptic cleft and binds to the postsynaptic receptor with a first-order kinetic scheme. Let $j$ be a presynaptic neuron to the postsynaptic neuron $i$. The synaptic current induced by the synapse from $j$ to $i$ can be modelled as the product of a conductance $g_{ij}$ with a voltage difference:

$$I_{ij}^{\mathrm{syn}} = g_{ij}(t)\,\left(V^i - V_{\mathrm{rev}}^{ij}\right).$$
(6)

The synaptic reversal potentials $V_{\mathrm{rev}}^{ij}$ are approximately constant within each population: $V_{\mathrm{rev}}^{ij} := V_{\mathrm{rev}}^{\alpha\gamma}$. The conductance $g_{ij}$ is the product of the maximum conductance $J_{ij}(t)$ with a function $y^j(t)$ that denotes the fraction of open channels and depends only upon the presynaptic neuron $j$:

$$g_{ij}(t) = J_{ij}(t)\, y^j(t).$$
(7)

The function $y^j(t)$ is often modelled [20] as satisfying the following ordinary differential equation:

$$\dot{y}^j(t) = a_r^j\, S_j(V^j)\,\left(1 - y^j(t)\right) - a_d^j\, y^j(t).$$

The positive constants $a_r^j$ and $a_d^j$ characterize the rise and decay rates, respectively, of the synaptic conductance. Their values depend only on the population of the presynaptic neuron $j$, i.e. $a_r^j := a_r^\gamma$ and $a_d^j := a_d^\gamma$, but may vary significantly from one population to the next. For example, gamma-aminobutyric acid (GABA)$_\mathrm{B}$ synapses are slow to activate and slow to turn off while the reverse is true for GABA$_\mathrm{A}$ and AMPA synapses [20]. $S_j(V^j)$ denotes the concentration of the transmitter released into the synaptic cleft by a presynaptic spike. We assume that the function $S_j$ is sigmoidal and that its exact form depends only upon the population of the neuron $j$. Its expression is given by (see, e.g. [20]):

$$S_\gamma(V^j) = \frac{T_{\max}^\gamma}{1 + e^{-\lambda_\gamma (V^j - V_T^\gamma)}}.$$
(8)

Destexhe et al. [23] give some typical values of the parameters: $T_{\max} = 1$ mM, $V_T = 2$ mV and $1/\lambda = 5$ mV.
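A small sketch evaluating the sigmoid of Equation 8 with these typical values:

```python
import numpy as np

# Transmitter concentration of Equation 8 with the typical values of
# Destexhe et al. [23]: T_max = 1 mM, V_T = 2 mV, 1/lambda = 5 mV.
T_max, V_T, lam = 1.0, 2.0, 1.0 / 5.0

def S(V):
    return T_max / (1.0 + np.exp(-lam * (V - V_T)))

print(S(-65.0), S(2.0), S(40.0))   # ~0 at rest, T_max/2 at V_T, ~T_max during a spike
```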

Because of the dynamics of ion channels and of their finite number, and similarly to the channel noise models derived through the Langevin approximation in the Hodgkin-Huxley model (Equation 2), we assume that the proportion of active channels is actually governed by a stochastic differential equation with a diffusion coefficient $\sigma_\gamma^y(V, y)$, depending only on the population $\gamma$ of $j$, of the form of Equation 1:

$$dy_t^j = \left(a_r^\gamma\, S_\gamma(V^j)\,\left(1 - y^j(t)\right) - a_d^\gamma\, y^j(t)\right) dt + \sigma_\gamma^y(V^j, y^j)\, dW_t^{j,y}.$$

In detail, we have

$$\sigma_\gamma^y(V^j, y^j) = \sqrt{a_r^\gamma\, S_\gamma(V^j)\,(1 - y^j) + a_d^\gamma\, y^j}\;\chi(y^j).$$
(9)

Remember that the form of the diffusion term guarantees that the solutions to this equation with appropriate initial conditions stay between 0 and 1. The Brownian motions $W^{j,y}$ are assumed to be independent from one neuron to the next.

2.4.2 Electrical synapses

The electrical synapse transmission is rapid and stereotyped and is mainly used to send simple depolarizing signals for systems requiring the fastest possible response. At the location of an electrical synapse, the separation between two neurons is very small (≈3.5 nm). This narrow gap is bridged by the gap junction channels, specialized protein structures that conduct the flow of ionic current from the presynaptic to the postsynaptic cell (see, e.g. [24]).

Electrical synapses thus work by allowing ionic current to flow passively through the gap junction pores from one neuron to another. The usual source of this current is the potential difference generated locally by the action potential. Without the need for receptors to recognize chemical messengers, signaling at electrical synapses is more rapid than that which occurs across chemical synapses, the predominant kind of junctions between neurons. The relative speed of electrical synapses also allows for many neurons to fire synchronously.

We model the current for this type of synapse as

$$I_{ij}^{\mathrm{elec}} = J_{ij}(t)\,\left(V^i - V^j\right),$$
(10)

where $J_{ij}(t)$ is the maximum conductance.

2.4.3 The maximum conductances

As shown in Equations 6, 7 and 10, we model the current going through the synapse connecting neuron $j$ to neuron $i$ as being proportional to the maximum conductance $J_{ij}$. Because the synaptic transmission through a synapse is affected by the nature of the environment, the maximum conductances undergo dynamical random variations (we do not take into account such phenomena as plasticity). What kind of models can we consider for these random variations?

The simplest idea is to assume that the maximum conductances are independent diffusion processes with mean $\bar{J}_{\alpha\gamma} / N_\gamma$ and standard deviation $\sigma_{\alpha\gamma}^J / N_\gamma$, i.e. quantities that depend only on the populations. The quantities $\bar{J}_{\alpha\gamma}$, being conductances, are positive. We write the following equation:

$$J_{i\gamma}(t) = \frac{\bar{J}_{\alpha\gamma}}{N_\gamma} + \frac{\sigma_{\alpha\gamma}^J}{N_\gamma}\, \xi_{i,\gamma}(t),$$
(11)

where the $\xi_{i,\gamma}(t)$, $i = 1, \ldots, N$, $\gamma = 1, \ldots, P$, are $NP$ independent zero-mean, unit-variance white noise processes derived from $NP$ independent standard Brownian motions $B^{i,\gamma}(t)$, i.e. $\xi_{i,\gamma}(t) = \frac{dB^{i,\gamma}(t)}{dt}$, which we also assume to be independent of all the previously defined Brownian motions. The main advantage of this dynamics is its simplicity. Its main disadvantage is that if we increase the noise level $\sigma_{\alpha\gamma}^J$, the probability that $J_{ij}(t)$ becomes negative increases also: this would result in a negative conductance!

One way to alleviate this problem is to modify the dynamics (Equation 11) to a slightly more complicated one whose solutions do not change sign, such as for instance, the Cox-Ingersoll-Ross model [25] given by:

$$dJ_{ij}(t) = \theta_{\alpha\gamma}\left(\frac{\bar{J}_{\alpha\gamma}}{N_\gamma} - J_{ij}(t)\right) dt + \frac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{ij}(t)}\; dB^{i,\gamma}(t).$$
(12)

Note that the right-hand side only depends upon the population $\gamma = p(j)$. Let $J_{ij}(0)$ be the initial condition; it is known [25] that

$$E\left[J_{ij}(t)\right] = J_{ij}(0)\, e^{-\theta_{\alpha\gamma} t} + \frac{\bar{J}_{\alpha\gamma}}{N_\gamma}\left(1 - e^{-\theta_{\alpha\gamma} t}\right),$$
$$\operatorname{Var}\left(J_{ij}(t)\right) = J_{ij}(0)\,\frac{(\sigma_{\alpha\gamma}^J)^2}{N_\gamma^2\,\theta_{\alpha\gamma}}\left(e^{-\theta_{\alpha\gamma} t} - e^{-2\theta_{\alpha\gamma} t}\right) + \frac{\bar{J}_{\alpha\gamma}\,(\sigma_{\alpha\gamma}^J)^2}{2\,N_\gamma^3\,\theta_{\alpha\gamma}}\left(1 - e^{-\theta_{\alpha\gamma} t}\right)^2.$$

This shows that if the initial condition $J_{ij}(0)$ is equal to the mean $\bar{J}_{\alpha\gamma} / N_\gamma$, the mean of the process is constant over time and equal to $\bar{J}_{\alpha\gamma} / N_\gamma$. Otherwise, if the initial condition $J_{ij}(0)$ is of the same sign as $\bar{J}_{\alpha\gamma}$, i.e. positive, then the long-term mean is $\bar{J}_{\alpha\gamma} / N_\gamma$ and the process is guaranteed not to touch 0 if the condition $2\, N_\gamma\, \theta_{\alpha\gamma}\, \bar{J}_{\alpha\gamma} \ge (\sigma_{\alpha\gamma}^J)^2$ holds [25]. Note that the long-term variance is $\frac{\bar{J}_{\alpha\gamma}\,(\sigma_{\alpha\gamma}^J)^2}{2\, N_\gamma^3\, \theta_{\alpha\gamma}}$.
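A minimal sketch of an Euler-Maruyama simulation of the sign-preserving dynamics (Equation 12), with the square root evaluated at the positive part of the current value so that the discretized process cannot produce a complex-valued diffusion term. The parameter values are illustrative placeholders.

```python
import numpy as np

# Illustrative placeholders (not taken from the text)
N_gamma, J_bar, sigma_J, theta = 100, 1.0, 0.2, 0.5
dt, T = 0.01, 50.0
rng = np.random.default_rng(2)

# condition guaranteeing the continuous-time process never touches 0 [25]
assert 2 * N_gamma * theta * J_bar >= sigma_J ** 2

mu = J_bar / N_gamma        # long-term mean
J = mu                      # starting at the mean keeps E[J(t)] constant over time
for _ in range(int(T / dt)):
    dB = np.sqrt(dt) * rng.standard_normal()
    J += theta * (mu - J) * dt + (sigma_J / N_gamma) * np.sqrt(max(J, 0.0)) * dB

long_term_var = J_bar * sigma_J ** 2 / (2 * N_gamma ** 3 * theta)
print(J, mu, long_term_var)
```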

2.5 Putting everything together

We are ready to write the equations of a network of Hodgkin-Huxley or FitzHugh-Nagumo neurons and study their properties and their limit, if any, when the number of neurons becomes large. The external current for neuron i has been modelled as the sum of a deterministic part and a stochastic part:

$$I_i^{\mathrm{ext}}(t) = I_i(t) + \sigma_{\mathrm{ext}}^i\,\frac{dW_t^i}{dt}.$$

We will assume that the deterministic part is the same for all neurons in the same population, $I_i := I_\alpha$, and that the same is true for the variance, $\sigma_{\mathrm{ext}}^i := \sigma_{\mathrm{ext}}^\alpha$. We further assume that the $N$ Brownian motions $W_t^i$ are independent and independent of all the other Brownian motions defined in the model. In other words,

$$I_i^{\mathrm{ext}}(t) = I_\alpha(t) + \sigma_{\mathrm{ext}}^\alpha\,\frac{dW_t^i}{dt}, \quad \alpha = p(i),\ i = 1, \ldots, N.$$
(13)

We only cover the case of chemical synapses and leave it to the reader to derive the equations in the simpler case of gap junctions.

2.5.1 Network of FitzHugh-Nagumo neurons

We assume that the parameters $a_i$, $b_i$ and $c_i$ in Equation 5 of the adaptation variable $w^i$ of neuron $i$ are only functions of the population $\alpha = p(i)$.

Simple maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 11, the state of the $i$th neuron in a fully connected network of FitzHugh-Nagumo neurons with chemical synapses is determined by the variables $(V^i, w^i, y^i)$ that satisfy the following set of $3N$ stochastic differential equations:

$$\begin{cases} dV_t^i = \left(V_t^i - \dfrac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\right) dt - \left(\displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} \bar{J}_{\alpha\gamma}\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dt \\ \qquad\qquad - \displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\left(\sum_{j,\, p(j)=\gamma} \sigma_{\alpha\gamma}^J\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dB_t^{i,\gamma} + \sigma_{\mathrm{ext}}^\alpha\, dW_t^i, \\ dw_t^i = c_\alpha\,\left(V_t^i + a_\alpha - b_\alpha w_t^i\right) dt, \\ dy_t^i = \left(a_r^\alpha\, S_\alpha(V_t^i)\,\left(1 - y_t^i\right) - a_d^\alpha\, y_t^i\right) dt + \sigma_\alpha^y(V_t^i, y_t^i)\, dW_t^{i,y}. \end{cases}$$
(14)
$S_\alpha(V_t^i)$ is given by Equation 8; $\sigma_\alpha^y$, by Equation 9; and $W_t^{i,y}$, $i = 1, \ldots, N$, are $N$ independent Brownian processes that model noise in the process of transmitter release into the synaptic clefts.
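To illustrate the structure of Equation 14, here is a minimal vectorized Euler-Maruyama sketch for a single population (P = 1, so α = γ = 1). All parameter values are illustrative placeholders. Note that the Brownian motion B^{i,γ} is shared by all synapses from population γ onto neuron i, so the synaptic terms reduce to the population average of y multiplying per-neuron factors; the clipping of y is a purely numerical safeguard against discretization overshoot.

```python
import numpy as np

# Illustrative single-population parameters (placeholders, not from the text)
N = 200
a, b, c = 0.7, 0.8, 0.08
I_ext, sigma_ext = 0.7, 0.25
J_bar, sigma_J, V_rev = 1.0, 0.2, 1.0
a_r, a_d, T_max, V_T, lam = 1.0, 1.0, 1.0, 2.0, 0.2
Gamma, Lam = 0.1, 0.5
dt, T = 0.01, 100.0
rng = np.random.default_rng(3)

def S(V):                                  # Equation 8
    return T_max / (1.0 + np.exp(-lam * (V - V_T)))

def chi(y):                                # Equation 3, vectorized
    out = np.zeros_like(y)
    inside = (y > 0.0) & (y < 1.0)
    out[inside] = Gamma * np.exp(-Lam / (1.0 - (2.0 * y[inside] - 1.0) ** 2))
    return out

V = rng.normal(0.0, 0.5, N)                # i.i.d. initial conditions (assumed)
w = np.zeros(N)
y = np.full(N, 0.1)
sq = np.sqrt(dt)
for _ in range(int(T / dt)):
    y_mean = y.mean()                      # (1/N) * sum_j y_t^j
    dW, dWy, dB = (sq * rng.standard_normal(N) for _ in range(3))
    syn_drift = -J_bar * (V - V_rev) * y_mean           # deterministic coupling
    syn_noise = -sigma_J * (V - V_rev) * y_mean * dB    # noisy coupling, shared B^{i,1}
    V_new = V + dt * (V - V ** 3 / 3.0 - w + I_ext + syn_drift) + sigma_ext * dW + syn_noise
    w_new = w + dt * c * (V + a - b * w)
    drift_y = a_r * S(V) * (1.0 - y) - a_d * y
    diff_y = np.sqrt(np.maximum(a_r * S(V) * (1.0 - y) + a_d * y, 0.0)) * chi(y)
    y_new = np.clip(y + dt * drift_y + diff_y * dWy, 0.0, 1.0)
    V, w, y = V_new, w_new, y_new

print(V.mean(), V.std())                   # empirical population mean and dispersion
```

Averaging V over the N neurons gives the kind of empirical mean activity that can be compared with the mean-field prediction discussed later in the paper.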

Sign-preserving maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 12, the situation is slightly more complicated. In effect, the state space of the neuron $i$ has to be augmented by the $P$ maximum conductances $J_{i\gamma}$, $\gamma = 1, \ldots, P$. We obtain

$$\begin{cases} dV_t^i = \left(V_t^i - \dfrac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t)\right) dt - \left(\displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} J_{ij}(t)\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dt + \sigma_{\mathrm{ext}}^\alpha\, dW_t^i, \\ dw_t^i = c_\alpha\,\left(V_t^i + a_\alpha - b_\alpha w_t^i\right) dt, \\ dy_t^i = \left(a_r^\alpha\, S_\alpha(V_t^i)\,\left(1 - y_t^i\right) - a_d^\alpha\, y_t^i\right) dt + \sigma_\alpha^y(V_t^i, y_t^i)\, dW_t^{i,y}, \\ dJ_{i\gamma}(t) = \theta_{\alpha\gamma}\left(\dfrac{\bar{J}_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\right) dt + \dfrac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{i\gamma}(t)}\; dB_t^{i,\gamma}, \quad \gamma = 1, \ldots, P, \end{cases}$$
(15)

which is a set of N(P+3) stochastic differential equations.

2.5.2 Network of Hodgkin-Huxley neurons

We provide a similar description in the case of the Hodgkin-Huxley neurons. We assume that the functions $\rho_x^i$ and $\zeta_x^i$, $x \in \{n, m, h\}$, that appear in Equation 2 only depend upon $\alpha = p(i)$.

Simple maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 11, the state of the $i$th neuron in a fully connected network of Hodgkin-Huxley neurons with chemical synapses is therefore determined by the variables $(V^i, n^i, m^i, h^i, y^i)$ that satisfy the following set of $5N$ stochastic differential equations:

$$\begin{cases} C\, dV_t^i = \left(I_\alpha(t) - \bar{g}_K\, (n^i)^4\,(V_t^i - E_K) - \bar{g}_{Na}\, (m^i)^3 h^i\,(V_t^i - E_{Na}) - g_L\,(V_t^i - E_L)\right) dt \\ \qquad\qquad - \left(\displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} \bar{J}_{\alpha\gamma}\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dt \\ \qquad\qquad - \displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\left(\sum_{j,\, p(j)=\gamma} \sigma_{\alpha\gamma}^J\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dB_t^{i,\gamma} + \sigma_{\mathrm{ext}}^\alpha\, dW_t^i, \\ dx^i(t) = \left(\rho_x^\alpha(V_t^i)\,(1 - x^i) - \zeta_x^\alpha(V_t^i)\, x^i\right) dt + \sigma_x(V_t^i, x^i)\, dW_t^{x,i}, \quad x \in \{n, m, h\}, \\ dy_t^i = \left(a_r^\alpha\, S_\alpha(V_t^i)\,\left(1 - y_t^i\right) - a_d^\alpha\, y_t^i\right) dt + \sigma_\alpha^y(V_t^i, y_t^i)\, dW_t^{i,y}. \end{cases}$$
(16)

Sign-preserving maximum conductance variation. If we assume that the maximum conductances fluctuate according to Equation 12, we use the same idea as in the FitzHugh-Nagumo case of augmenting the state space of each individual neuron and obtain the following set of (5+P)N stochastic differential equations:

$$\begin{cases} C\, dV_t^i = \left(I_\alpha(t) - \bar{g}_K\, (n^i)^4\,(V_t^i - E_K) - \bar{g}_{Na}\, (m^i)^3 h^i\,(V_t^i - E_{Na}) - g_L\,(V_t^i - E_L)\right) dt \\ \qquad\qquad - \left(\displaystyle\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} J_{ij}(t)\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j\right) dt + \sigma_{\mathrm{ext}}^\alpha\, dW_t^i, \\ dx^i(t) = \left(\rho_x^\alpha(V_t^i)\,(1 - x^i) - \zeta_x^\alpha(V_t^i)\, x^i\right) dt + \sigma_x(V_t^i, x^i)\, dW_t^{x,i}, \quad x \in \{n, m, h\}, \\ dy_t^i = \left(a_r^\alpha\, S_\alpha(V_t^i)\,\left(1 - y_t^i\right) - a_d^\alpha\, y_t^i\right) dt + \sigma_\alpha^y(V_t^i, y_t^i)\, dW_t^{i,y}, \\ dJ_{i\gamma}(t) = \theta_{\alpha\gamma}\left(\dfrac{\bar{J}_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\right) dt + \dfrac{\sigma_{\alpha\gamma}^J}{N_\gamma}\sqrt{J_{i\gamma}(t)}\; dB_t^{i,\gamma}, \quad \gamma = 1, \ldots, P. \end{cases}$$
(17)

2.5.3 Partial conclusion

Equations 14 to 17 have a quite similar structure. They are well-posed, i.e. given any initial condition and any time $T > 0$, they have a unique solution on $[0,T]$ which is square-integrable. A little bit of care has to be taken when choosing these initial conditions for some of the variables, i.e. $n$, $m$ and $h$, which take values between 0 and 1, and for the maximum conductances when one wants to preserve their signs.

In order to prepare the grounds for the ‘Mean-field equations for conductance-based models’ section, we explore a bit more the aforementioned common structure. Let us first consider the case of the simple maximum conductance variations for the FitzHugh-Nagumo network. Looking at Equation 14, we define the three-dimensional state vector of neuron $i$ to be $X_t^i = (V_t^i, w_t^i, y_t^i)$. Let us now define $f_\alpha : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^3$, $\alpha = 1, \ldots, P$, by

$$f_\alpha(t, X_t^i) = \begin{bmatrix} V_t^i - \frac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t) \\ c_\alpha\,(V_t^i + a_\alpha - b_\alpha w_t^i) \\ a_r^\alpha\, S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\, y_t^i \end{bmatrix}.$$

Let us next define $g_\alpha : \mathbb{R} \times \mathbb{R}^3 \to \mathbb{R}^{3 \times 2}$ by

$$g_\alpha(t, X_t^i) = \begin{bmatrix} \sigma_{\mathrm{ext}}^\alpha & 0 \\ 0 & 0 \\ 0 & \sigma_\alpha^y(V_t^i, y_t^i) \end{bmatrix}.$$

It appears that the intrinsic dynamics of the neuron i is conveniently described by the equation

$$dX_t^i = f_\alpha(t, X_t^i)\, dt + g_\alpha(t, X_t^i) \begin{bmatrix} dW_t^i \\ dW_t^{i,y} \end{bmatrix}.$$

We next define the functions $b_{\alpha\gamma} : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^3$, for $\alpha, \gamma = 1, \ldots, P$, by

$$b_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -\bar{J}_{\alpha\gamma}\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j \\ 0 \\ 0 \end{bmatrix}$$

and the function $\beta_{\alpha\gamma} : \mathbb{R}^3 \times \mathbb{R}^3 \to \mathbb{R}^{3 \times 1}$ by

$$\beta_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -\sigma_{\alpha\gamma}^J\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j \\ 0 \\ 0 \end{bmatrix}.$$

It appears that the full dynamics of the neuron i, corresponding to Equation 14, can be described compactly by

$$\begin{aligned} dX_t^i = {} & f_\alpha(t, X_t^i)\, dt + g_\alpha(t, X_t^i) \begin{bmatrix} dW_t^i \\ dW_t^{i,y} \end{bmatrix} \\ & + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} b_{\alpha\gamma}(X_t^i, X_t^j)\, dt + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} \beta_{\alpha\gamma}(X_t^i, X_t^j)\, dB_t^{i,\gamma}. \end{aligned}$$
(18)

Let us now move to the case of the sign-preserving variation of the maximum conductances, still for the FitzHugh-Nagumo neurons. The state of each neuron is now $(P+3)$-dimensional: we define $X_t^i = (V_t^i, w_t^i, y_t^i, J_{i1}(t), \ldots, J_{iP}(t))$. We then define the functions $f_\alpha : \mathbb{R} \times \mathbb{R}^{P+3} \to \mathbb{R}^{P+3}$, $\alpha = 1, \ldots, P$, by

$$f_\alpha(t, X_t^i) = \begin{bmatrix} V_t^i - \frac{(V_t^i)^3}{3} - w_t^i + I_\alpha(t) \\ c_\alpha\,(V_t^i + a_\alpha - b_\alpha w_t^i) \\ a_r^\alpha\, S_\alpha(V_t^i)\,(1 - y_t^i) - a_d^\alpha\, y_t^i \\ \theta_{\alpha\gamma}\left(\frac{\bar{J}_{\alpha\gamma}}{N_\gamma} - J_{i\gamma}(t)\right),\ \gamma = 1, \ldots, P \end{bmatrix}$$

and the functions $g_\alpha : \mathbb{R} \times \mathbb{R}^{P+3} \to \mathbb{R}^{(P+3) \times (P+2)}$ by

$$g_\alpha(t, X_t^i) = \begin{bmatrix} \sigma_{\mathrm{ext}}^\alpha & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ 0 & \sigma_\alpha^y(V_t^i, y_t^i) & 0 & \cdots & 0 \\ 0 & 0 & \frac{\sigma_{\alpha 1}^J}{N_1}\sqrt{J_{i1}(t)} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & \frac{\sigma_{\alpha P}^J}{N_P}\sqrt{J_{iP}(t)} \end{bmatrix}.$$

It appears that the intrinsic dynamics of the neuron i isolated from the other neurons is conveniently described by the equation

$$dX_t^i = f_\alpha(t, X_t^i)\, dt + g_\alpha(t, X_t^i) \begin{bmatrix} dW_t^i \\ dW_t^{i,y} \\ dB_t^{i,1} \\ \vdots \\ dB_t^{i,P} \end{bmatrix}.$$

Let us finally define the functions $b_{\alpha\gamma} : \mathbb{R}^{P+3} \times \mathbb{R}^{P+3} \to \mathbb{R}^{P+3}$, for $\alpha, \gamma = 1, \ldots, P$, by

$$b_{\alpha\gamma}(X_t^i, X_t^j) = \begin{bmatrix} -J_{ij}(t)\,\left(V_t^i - V_{\mathrm{rev}}^{\alpha\gamma}\right) y_t^j \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

It appears that the full dynamics of the neuron i, corresponding to Equation 15 can be described compactly by

$$dX_t^i = f_\alpha(t, X_t^i)\, dt + g_\alpha(t, X_t^i) \begin{bmatrix} dW_t^i \\ dW_t^{i,y} \\ dB_t^{i,1} \\ \vdots \\ dB_t^{i,P} \end{bmatrix} + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} b_{\alpha\gamma}(X_t^i, X_t^j)\, dt.$$
(19)

We let the reader apply the same machinery to the network of Hodgkin-Huxley neurons.

Let us denote by $d$ the positive integer equal to the dimension of the state space in Equation 18 ($d = 3$) or 19 ($d = 3 + P$), or in the corresponding cases for the Hodgkin-Huxley model ($d = 5$ and $d = 5 + P$). The reader will easily check that the following four assumptions hold for both models:

(H1) Locally Lipschitz dynamics: For all $\alpha \in \{1, \ldots, P\}$, the functions $f_\alpha$ and $g_\alpha$ are uniformly locally Lipschitz continuous with respect to the second variable. In detail, for all $U > 0$, there exists $K_U > 0$ independent of $t \in [0,T]$ such that for all $x, y \in B_U^d$, the ball of $\mathbb{R}^d$ of radius $U$:

$$\|f_\alpha(t,x) - f_\alpha(t,y)\| + \|g_\alpha(t,x) - g_\alpha(t,y)\| \le K_U\, \|x - y\|, \quad \alpha = 1, \ldots, P.$$

(H2) Locally Lipschitz interactions: For all $\alpha, \gamma \in \{1, \ldots, P\}$, the functions $b_{\alpha\gamma}$ and $\beta_{\alpha\gamma}$ are locally Lipschitz continuous. In detail, for all $U > 0$, there exists $L_U > 0$ such that for all $x, y, x', y' \in B_U^d$, we have:

$$\|b_{\alpha\gamma}(x,y) - b_{\alpha\gamma}(x',y')\| + \|\beta_{\alpha\gamma}(x,y) - \beta_{\alpha\gamma}(x',y')\| \le L_U\,\left(\|x - x'\| + \|y - y'\|\right).$$

(H3) Linear growth of the interactions: There exists a $\tilde{K} > 0$ such that

$$\max\left(\|b_{\alpha\gamma}(x,z)\|^2,\ \|\beta_{\alpha\gamma}(x,z)\|^2\right) \le \tilde{K}\,\left(1 + \|x\|^2\right).$$

(H4) Monotone growth of the dynamics: We assume that $f_\alpha$ and $g_\alpha$ satisfy the following monotone growth condition for all $\alpha = 1, \ldots, P$:

$$x^T f_\alpha(t,x) + \frac{1}{2}\,\|g_\alpha(t,x)\|^2 \le K\,\left(1 + \|x\|^2\right).$$
(20)
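As an illustration of why the non-globally Lipschitz FitzHugh-Nagumo drift still satisfies (H4), here is a sketch of the computation for the three-dimensional $f_\alpha$ and $g_\alpha$ defined above: writing $x = (V, w, y)$, the cubic term contributes a nonpositive quantity and everything else is at most quadratic with bounded coefficients.

$$\begin{aligned} x^T f_\alpha(t,x) &= V\Big(V - \tfrac{V^3}{3} - w + I_\alpha(t)\Big) + c_\alpha\, w\,(V + a_\alpha - b_\alpha w) + y\big(a_r^\alpha S_\alpha(V)(1-y) - a_d^\alpha y\big) \\ &\le -\tfrac{V^4}{3} + C_1\big(1 + V^2 + w^2 + y^2\big) \le C_1\big(1 + \|x\|^2\big), \end{aligned}$$

where $C_1$ depends only on $I_m$, $a_\alpha$, $b_\alpha$, $c_\alpha$, $a_r^\alpha T_{\max}^\alpha$ and $a_d^\alpha$ (using $2|Vw| \le V^2 + w^2$ and similar bounds), and $\frac{1}{2}\|g_\alpha(t,x)\|^2 = \frac{1}{2}\big((\sigma_{\mathrm{ext}}^\alpha)^2 + \sigma_\alpha^y(V,y)^2\big)$ is bounded since the rates and the $\chi$ function appearing in $\sigma_\alpha^y$ are bounded.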

These assumptions are central to the proofs of Theorems 2 and 4.

They imply the following proposition stating that the system of stochastic differential equations (Equation 19) is well-posed:

Proposition 1 Let $T > 0$ be a fixed time. If $|I_\alpha(t)| \le I_m$ on $[0,T]$, for $\alpha = 1, \ldots, P$, Equations 18 and 19 together with an initial condition $X_0^i \in L^2(\mathbb{R}^d)$, $i = 1, \ldots, N$, of square-integrable random variables, have a unique strong solution which belongs to $L^2([0,T]; \mathbb{R}^{dN})$.

Proof The proof uses Theorem 3.5 in Chapter 2 of [26], whose conditions are easily shown to follow from hypotheses (H1) to (H4). □

The case $N = 1$ implies that Equations 2 and 5, describing the stochastic Hodgkin-Huxley and FitzHugh-Nagumo neurons, are well-posed.

We are interested in the behavior of the solutions of these equations as the number of neurons tends to infinity. This problem has been long-standing in neuroscience, arousing the interest of many researchers in different domains. We discuss the different approaches developed in the field in the next subsection.

2.6 Mean-field methods in computational neuroscience: a quick overview

Obtaining the equations of evolution of the effective mean-field from microscopic dynamics is a very complex problem. Many approximate solutions have been provided, mostly based on the statistical physics literature.

Many models describing the emergent behavior arising from the interaction of neurons in large-scale networks have relied on continuum limits ever since the seminal work of Amari, and Wilson and Cowan [27–30]. Such models represent the activity of the network by macroscopic variables, e.g. the population-averaged firing rate, which are generally assumed to be deterministic. When the spatial dimension is not taken into account in the equations, they are referred to as neural masses, otherwise as neural fields. The equations that relate these variables are ordinary differential equations for neural masses and integrodifferential equations for neural fields. In the second case, they fall in a category studied in [31] or can be seen as ordinary differential equations defined on specific functional spaces [32]. Many analytical and numerical results have been derived from these equations and related to cortical phenomena, for instance, for the problem of spatio-temporal pattern formation in spatially extended models (see, e.g. [33–36]). The use of bifurcation theory has also proven to be quite powerful [37, 38]. Despite all its qualities, this approach implicitly makes the assumption that the effect of noise vanishes at the mesoscopic and macroscopic scales and hence that the behavior of such populations of neurons is deterministic.

A different approach has been to study regimes where the activity is uncorrelated. A number of computational studies on the integrate-and-fire neuron showed that under certain conditions, neurons in large assemblies end up firing asynchronously, producing null correlations [39–41]. In these regimes, the correlations in the firing activity decrease towards zero in the limit where the number of neurons tends to infinity. The emergent global activity of the population in this limit is deterministic and evolves according to a mean-field firing rate equation. However, according to the theory, these states only exist in the limit where the number of neurons is infinite, thereby raising the question of how the finiteness of the number of neurons impacts the existence and behavior of asynchronous states. The study of finite-size effects for asynchronous states is generally not reduced to the study of mean firing rates and can include higher order moments of firing activity [42–44]. In order to go beyond asynchronous states and take into account the stochastic nature of the firing and understand how this activity scales as the network size increases, different approaches have been developed, such as the population density method and related approaches [45]. Most of these approaches involve expansions in terms of the moments of the corresponding random variables, and the moment hierarchy needs to be truncated which is not a simple task that can raise a number of difficult technical issues (see, e.g. [46]).

However, increasingly many researchers now believe that the different intrinsic or extrinsic noise sources are part of the neuronal signal, and rather than being a pure disturbing effect related to the intrinsically noisy biological substrate of the neural system, they suggest that noise conveys information that can be an important principle of brain function [47]. At the level of a single cell, various studies have shown that the firing statistics are highly stochastic with probability distributions close to the Poisson distributions [48], and several computational studies confirmed the stochastic nature of single-cell firings [49–51]. How the variability at the single-neuron level affects the dynamics of cortical networks is less well established. Theoretically, the interaction of a large number of neurons that fire stochastic spike trains can naturally produce correlations in the firing activity of the population. For instance, power laws in the scaling of avalanche-size distributions have been studied both via models and experiments [52–55]. In these regimes, the randomness plays a central role.

In order to study the effect of the stochastic nature of the firing in large networks, many authors strived to introduce randomness in a tractable form. Some of the models proposed in the area are based on the definition of a Markov chain governing the firing dynamics of the neurons in the network, where the transition probability satisfies a differential equation, the master equation. Seminal works of the application of such modeling for neuroscience date back to the early 1990s and have been recently developed by several authors [43, 56–59]. Most of these approaches are proved correct in some parameter regions using statistical physics tools such as path integrals and Van Kampen expansions, and their analysis often involves a moment expansion and truncation. Using a different approach, a static mean-field study of multi-population network activity was developed by Treves in [60]. This author did not consider external inputs but incorporated dynamical synaptic currents and adaptation effects. His analysis was completed in [39], where the authors proved, using a Fokker-Planck formalism, the stability of an asynchronous state in the network. Later on, Gerstner in [61] built a new approach to characterize the mean-field dynamics for the spike response model, via the introduction of suitable kernels propagating the collective activity of a neural population in time. Another approach is based on the use of large deviation techniques to study large networks of neurons [62]. This approach is inspired by the work on spin-glass dynamics, e.g. [63]. It takes into account the randomness of the maximum conductances and the noise at various levels. The individual neuron models are rate models, hence already mean-field models. The mean-field equations are not rigorously derived from the network equations in the limit of an infinite number of neurons, but they are shown to have a unique, non-Markov solution, i.e. with infinite memory, for each initial condition.

Brunel and Hakim considered a network of integrate-and-fire neurons connected with constant maximum conductances [41]. In the case of sparse connectivity, stationarity, and in a regime where individual neurons emit spikes at a low rate, they were able to analytically study the dynamics of the network and to show that it exhibits a sharp transition between a stationary regime and a regime of fast, weakly synchronized collective oscillations. Their approach was based on a perturbative analysis of the Fokker-Planck equation. A similar formalism was used in [44] which, when complemented with self-consistency equations, resulted in the dynamical description of the mean-field equations of the network and was extended to a multi-population network. Finally, Chizhov and Graham [64] have recently proposed a new method based on a population density approach allowing to characterize the mesoscopic behavior of neuron populations in conductance-based models.

Let us finish this very short and incomplete survey by mentioning the work of Sompolinsky and colleagues. Assuming a linear intrinsic dynamics for the individual neurons described by a rate model and random centered maximum conductances for the connections, they showed [65, 66] that the system undergoes a phase transition between two different stationary regimes: a ‘trivial’ regime where the system has a unique null and uncorrelated solution, and a ‘chaotic’ regime in which the firing rate converges towards a non-zero value and correlations stabilize on a specific curve which they were able to characterize.

All these approaches have in common that they are not based on the most widely accepted microscopic dynamics (such as the ones represented by the Hodgkin-Huxley equations or some of their simplifications) and/or involve approximations or moment closures. Our approach is distinct in that it aims at deriving rigorously and without approximations the mean-field equations of populations of neurons whose individual neurons are described by biological, if not correct at least plausible, representations. The price to pay is the complexity of the resulting mean-field equations. The specific study of their solutions is therefore a crucial step, which will be developed in forthcoming papers.

3 Mean-field equations for conductance-based models

In this section, we give a general formulation of the neural network models introduced in the previous section and use it in a probabilistic framework to address the problem of the asymptotic behavior of the networks, as the number of neurons $N$ goes to infinity. In other words, we derive the limit in law of $N$ interacting neurons, each of which satisfies a nonlinear stochastic differential equation of the type described in the ‘Spiking conductance-based models’ section. In the remainder of this section, we work in a complete probability space $(\Omega, \mathcal{F}, P)$ endowed with a filtration $(\mathcal{F}_t)_t$ satisfying the usual conditions.

3.1 Setting of the problem

We recall that the neurons in the network fall into $P$ different populations. The populations differ through the intrinsic properties of their neurons and the input they receive. We assume that the number of neurons in each population $\alpha \in \{1, \ldots, P\}$, denoted by $N_\alpha$, increases as the network size increases, and moreover that the asymptotic proportion of neurons in population $\alpha$ is nontrivial, i.e. $N_\alpha / N \to \lambda_\alpha \in (0,1)$ as $N$ goes to infinity.

We use the notations introduced in the ‘Partial conclusion’ section, and the reader should refer to this section to give a concrete meaning to the rather abstract (but required by the mathematics) setting that we now establish.

Each neuron $i$ in population $\alpha$ is described by a state vector noted as $X_t^{i,N}$ in $\mathbb{R}^d$ and has an intrinsic dynamics governed by a drift function $f_\alpha : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^d$ and a diffusion matrix $g_\alpha : \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}^{d \times m}$ assumed uniformly locally Lipschitz continuous with respect to the second variable. For a neuron $i$ in population $\alpha$, the dynamics of the $d$-dimensional process $(X_t^i)$ governing the evolution of the membrane potential and additional variables (adaptation, ionic concentrations), when there is no interaction, is governed by the equation:

$$dX_t^{i,N} = f_\alpha(t, X_t^{i,N})\, dt + g_\alpha(t, X_t^{i,N})\, dW_t^i.$$

Moreover, we assume, as it is the case for all the models described in the ‘Spiking conductance-based models’ section, that the solutions of this stochastic differential equation exist for all time.

When included in the network, these processes interact with those of all the other neurons through a set of continuous functions that only depend on the population $\alpha = p(i)$ the neuron $i$ belongs to and on the populations $\gamma$ of the presynaptic neurons. These functions, $b_{\alpha\gamma}(x, y) : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d$, are scaled by the coefficients $1 / N_\gamma$, so the maximal interaction is independent of the size of the network (in particular, neither diverging nor vanishing as $N$ goes to infinity).

As discussed in the ‘Spiking conductance-based models’ section, due to the stochastic nature of ionic currents and the noise effects linked with the discrete nature of charge carriers, the maximum conductances are perturbed dynamically through the $N \times P$ independent Brownian motions $B_t^{i,\alpha}$ of dimension $\delta$ that were previously introduced. The interaction between the neurons and the noise term is represented by the function $\beta_{\alpha\gamma} : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^{d \times \delta}$.

In order to introduce the stochastic current and stochastic maximum conductances, we define two independent sequences of independent $m$- and $\delta$-dimensional Brownian motions noted as $(W_t^i)_{i \in \mathbb{N}}$ and $(B_t^{i\alpha})_{i \in \mathbb{N},\, \alpha \in \{1, \ldots, P\}}$ which are adapted to the filtration $\mathcal{F}_t$.

The resulting equation for the i th neuron, including the noisy interactions, reads:

$$\begin{aligned} dX_t^{i,N} = {} & f_\alpha(t, X_t^{i,N})\, dt + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} b_{\alpha\gamma}(X_t^{i,N}, X_t^{j,N})\, dt \\ & + g_\alpha(t, X_t^{i,N})\, dW_t^i + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j,\, p(j)=\gamma} \beta_{\alpha\gamma}(X_t^{i,N}, X_t^{j,N})\, dB_t^{i\gamma}. \end{aligned}$$
(21)

Note that this implies that $X^{i,N}$ and $X^{j,N}$ have the same law whenever $p(i) = p(j)$, given identically distributed initial conditions.

These equations are similar to the equations studied in another context by a number of mathematicians, among which are McKean, Tanaka and Sznitman (see the ‘Introduction’ section), in that they involve a very large number of particles (here, particles are neurons) in interaction. Motivated by the study of the McKean-Vlasov equations, these authors studied special cases of equations (Equation 21). This theory, referred to as the kinetic theory, is chiefly interested in the study of thermodynamic questions. They show that in the limit where the number of particles tends to infinity, provided that the initial state of each particle is drawn independently from the same law, each particle behaves independently and has the same law, which is given by an implicit stochastic equation. They also evaluate the fluctuations around this limit under diverse conditions [1, 2, 6, 7, 9–11]. Some extensions to biological problems where the drift term is not globally Lipschitz but satisfies the monotone growth condition (Equation 20) were studied in [67]. This is the approach we undertake here.

3.2 Convergence of the network equations to the mean-field equations and properties of those equations

We now show that the same type of phenomena that were predicted for systems of interacting particles happen in networks of neurons. In detail, we prove that in the limit of large populations, the network displays the property of propagation of chaos. This means that any finite number of diffusion processes become independent, and all neurons belonging to a given population α have asymptotically the same probability distribution, which is the solution of the following mean-field equation:

$$\begin{aligned} d\bar{X}_t^\alpha = {} & f_\alpha(t, \bar{X}_t^\alpha)\, dt + \sum_{\gamma=1}^{P} E_{\bar{Z}}\left[b_{\alpha\gamma}(\bar{X}_t^\alpha, \bar{Z}_t^\gamma)\right] dt \\ & + g_\alpha(t, \bar{X}_t^\alpha)\, dW_t^\alpha + \sum_{\gamma=1}^{P} E_{\bar{Z}}\left[\beta_{\alpha\gamma}(\bar{X}_t^\alpha, \bar{Z}_t^\gamma)\right] dB_t^{\alpha\gamma}, \quad \alpha = 1, \ldots, P, \end{aligned}$$
(22)

where $\bar{Z}$ is a process independent of $\bar{X}$ that has the same law, and $E_{\bar{Z}}$ denotes the expectation under the law of $\bar{Z}$. In other words, the mean-field equation can be written, denoting by $dm_t^\gamma(z)$ the law of $\bar{Z}_t^\gamma$ (hence, also of $\bar{X}_t^\gamma$):

$$\begin{aligned} d\bar{X}_t^\alpha = {} & f_\alpha(t, \bar{X}_t^\alpha)\, dt + \sum_{\gamma=1}^{P}\left(\int_{\mathbb{R}^d} b_{\alpha\gamma}(\bar{X}_t^\alpha, z)\, dm_t^\gamma(z)\right) dt \\ & + g_\alpha(t, \bar{X}_t^\alpha)\, dW_t^\alpha + \sum_{\gamma=1}^{P}\left(\int_{\mathbb{R}^d} \beta_{\alpha\gamma}(\bar{X}_t^\alpha, z)\, dm_t^\gamma(z)\right) dB_t^{\alpha\gamma}. \end{aligned}$$
(23)

In these equations, $W_t^\alpha$, for $\alpha = 1, \ldots, P$, are independent, standard, $m$-dimensional Brownian motions. Let us point out the fact that the right-hand side of Equations 22 and 23 depends on the law of the solution; this fact is sometimes referred to as ‘the process $\bar{X}$ is attracted by its own law’. This equation is also classically written as the McKean-Vlasov-Fokker-Planck equation on the probability distribution $p$ of the solution. This equation, which we use in the ‘Numerical simulations’ section, can be easily derived from Equation 22. Let $p_\alpha(t, z)$, $z = (z_1, \ldots, z_d)$, be the probability density at time $t$ of the solution $\bar{X}_t^\alpha$ to Equation 22 (this is equivalent to $dm_t^\alpha(z) = p_\alpha(t, z)\, dz$); then we have:

$$\begin{aligned} \partial_t p_\alpha(t, z) = {} & -\operatorname{div}_z\left(\left(f_\alpha(t, z) + \sum_{\gamma=1}^{P}\int b_{\alpha\gamma}(z, y)\, p_\gamma(t, y)\, dy\right) p_\alpha(t, z)\right) \\ & + \frac{1}{2}\sum_{i,j=1}^{d}\frac{\partial^2}{\partial z_i\, \partial z_j}\left(D_{ij}^\alpha(z)\, p_\alpha(t, z)\right), \quad \alpha = 1, \ldots, P, \end{aligned}$$
(24)

where the $d \times d$ matrix $D^\alpha$ is given by

$$D^\alpha(z) = \sum_{\gamma=1}^{P} E_Z\left[\beta_{\alpha\gamma}(z, Z)\right]\, E_Z^T\left[\beta_{\alpha\gamma}(z, Z)\right] + g_\alpha(t, z)\, g_\alpha^T(t, z)$$

with

$$E_Z\left[\beta_{\alpha\gamma}(z, Z)\right] = \int \beta_{\alpha\gamma}(z, y)\, p_\gamma(t, y)\, dy.$$

The $P$ equations (Equation 24) yield the probability densities of the solutions $\bar{X}_t^\alpha$ of the mean-field equations (Equation 22). Because of the propagation of chaos result, the $\bar{X}_t^\alpha$ are statistically independent, but their probability functions are clearly functionally dependent.
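Since Equations 22 to 24 are implicit in the law of the solution, a direct simulation must handle the expectations $E_{\bar{Z}}[\cdot]$. A natural strategy, sketched here only to illustrate the structure (it is not necessarily the numerical scheme used later in the paper), is a particle approximation: the law $dm_t^\gamma$ is replaced by the empirical measure of $M$ interacting copies, which brings us back to a system of the form of Equation 21 with $N_\gamma = M$:

$$E_{\bar{Z}}\left[b_{\alpha\gamma}(\bar{X}_t^\alpha, \bar{Z}_t^\gamma)\right] \;\approx\; \frac{1}{M}\sum_{k=1}^{M} b_{\alpha\gamma}\big(\bar{X}_t^{\alpha}, \tilde{X}_t^{\gamma,k}\big), \qquad E_{\bar{Z}}\left[\beta_{\alpha\gamma}(\bar{X}_t^\alpha, \bar{Z}_t^\gamma)\right] \;\approx\; \frac{1}{M}\sum_{k=1}^{M} \beta_{\alpha\gamma}\big(\bar{X}_t^{\alpha}, \tilde{X}_t^{\gamma,k}\big),$$

where the $\tilde{X}^{\gamma,k}$ are the $M$ simulated copies in population $\gamma$; the convergence result proved in the Appendix is precisely what guarantees that such an approximation becomes exact as $M \to \infty$.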

We now spend some time on notations in order to obtain a somewhat more compact form of Equation 22. We define $\bar{X}_t$ to be the $dP$-dimensional process $\bar{X}_t = (\bar{X}_t^\alpha;\ \alpha = 1, \ldots, P)$. We similarly define $f$, $g$, $b$ and $\beta$ as the concatenations of the functions $f_\alpha$, $g_\alpha$, $b_{\alpha\gamma}$ and $\beta_{\alpha\gamma}$, respectively. In detail, $f(t, \bar{X}) = (f_\alpha(t, \bar{X}_t^\alpha);\ \alpha = 1, \ldots, P)$, $b(X, Y) = \big(\sum_{\gamma=1}^{P} b_{\alpha\gamma}(X^\alpha, Y^\gamma);\ \alpha = 1, \ldots, P\big)$ and $W = (W^\alpha;\ \alpha = 1, \ldots, P)$. The term of noisy synaptic interactions requires a more careful treatment. We define $\beta = (\beta_{\alpha\gamma};\ \alpha, \gamma = 1, \ldots, P) \in (\mathbb{R}^{d \times \delta})^{P \times P}$ and $B = (B^{\alpha\gamma};\ \alpha, \gamma = 1, \ldots, P) \in (\mathbb{R}^\delta)^{P \times P}$, and the product of an element $M \in (\mathbb{R}^{d \times \delta})^{P \times P}$ and an element $X \in (\mathbb{R}^\delta)^{P \times P}$ as the element of $(\mathbb{R}^d)^P$:

$$(M\, X)_\alpha = \sum_\gamma M_{\alpha\gamma}\, X_{\alpha\gamma}.$$

We obtain the equivalent compact mean-field equation:

$$d\bar{X}_t = \left(f(t, \bar{X}_t) + E_{\bar{Z}}\left[b(\bar{X}_t, \bar{Z}_t)\right]\right) dt + g(t, \bar{X}_t)\, dW_t + E_{\bar{Z}}\left[\beta(\bar{X}_t, \bar{Z}_t)\right]\, dB_t.$$
(25)

Equations 22 and 24 are implicit equations on the law of $\bar{X}_t$.

We now state the main theoretical results of the paper as two theorems. The first theorem is about the well-posedness of the mean-field equation (Equation 22). The second is about the convergence of the solutions of the network equations to those of the mean-field equations. Since the proof of the second theorem involves similar ideas to those used in the proof of the first, it is given in the Appendix.

Theorem 2 Under assumptions (H1) to (H4), there exists a unique solution to the mean-field equation (Equation  22) on [0,T] for any T>0.

Let us denote by $\mathcal{M}(\mathcal{C})$ the set of probability distributions on $\mathcal{C}$, the set of continuous functions $[0,T] \to (\mathbb{R}^d)^P$, and by $\mathcal{M}_2(\mathcal{C})$ the space of square-integrable processes. Let $(W^\alpha;\ \alpha = 1, \ldots, P)$ (respectively, $(B^{\alpha\gamma};\ \alpha, \gamma = 1, \ldots, P)$) also be a family of $P$ (respectively, $P^2$) independent, $m$ (respectively, $\delta$)-dimensional, adapted standard Brownian motions on $(\Omega, \mathcal{F}, P)$. Let us also note $X_0 \in \mathcal{M}((\mathbb{R}^d)^P)$ as the (random) initial condition of the mean-field equation. We introduce the map $\Phi$ acting on stochastic processes and defined by:

$$\Phi : \begin{cases} \mathcal{M}(\mathcal{C}) \longrightarrow \mathcal{M}(\mathcal{C}), \\ X \longmapsto \left(Y_t = \{Y_t^\alpha,\ \alpha = 1, \ldots, P\}\right)_t \text{ with} \\[4pt] \displaystyle Y_t^\alpha = X_0^\alpha + \int_0^t \left(f_\alpha(s, X_s^\alpha) + \sum_{\gamma=1}^{P} E_Z\left[b_{\alpha\gamma}(X_s^\alpha, Z_s^\gamma)\right]\right) ds + \int_0^t g_\alpha(s, X_s^\alpha)\, dW_s^\alpha \\ \displaystyle \qquad\quad + \sum_{\gamma=1}^{P} \int_0^t E_Z\left[\beta_{\alpha\gamma}(X_s^\alpha, Z_s^\gamma)\right] dB_s^{\alpha\gamma}, \quad \alpha = 1, \ldots, P. \end{cases}$$

We have introduced in the previous formula the process $Z_t$ with the same law as, and independent of, $X_t$. There is a trivial identification between the solutions of the mean-field equation (Equation 22) and the fixed points of the map $\Phi$: any fixed point of $\Phi$ provides a solution for Equation 22, and conversely, any solution of Equation 22 is a fixed point of $\Phi$.

The following lemma is useful to prove the theorem:

Lemma 3 Let $X_0 \in L^2((\mathbb{R}^d)^P)$ be a square-integrable random variable. Let $X$ be a solution of the mean-field equation (Equation 22) with initial condition $X_0$. Under assumptions (H3) and (H4), there exists a constant $C(T) > 0$ depending on the parameters of the system and on the horizon $T$, such that:

$$E\left[\|X_t\|^2\right] \le C(T), \quad \forall t \in [0,T].$$

Proof Using the Itô formula for $\|X_t\|^2$, we have:

$$\begin{aligned} \|X_t\|^2 = {} & \|X_0\|^2 + 2\int_0^t \Big( X_s^T f(s, X_s) + \frac{1}{2}\,\|g(s, X_s)\|^2 + X_s^T E_Z\left[b(X_s, Z_s)\right] \\ & + \frac{1}{2}\,\left\|E_Z\left[\beta(X_s, Z_s)\right]\right\|^2 \Big)\, ds + N_t, \end{aligned}$$

where $N_t$ is a stochastic integral, hence with null expectation, $\mathbb{E}[N_t]=0$.

This expression involves the term $x^T b(x,z)$. Because of assumption (H3), we clearly have:

$$\big|x^T b(x,z)\big| \le \|x\|\,\big\|b(x,z)\big\| \le \|x\|\,\tilde K\sqrt{1+\|x\|^2} \le \tilde K\big(1+\|x\|^2\big).$$

It also involves the term $x^T f(t,x) + \frac{1}{2}\|g(t,x)\|^2$ which, because of assumption (H4), is upperbounded by $K(1+\|x\|^2)$. Finally, assumption (H3) again allows us to upperbound the term $\frac{1}{2}\|\mathbb{E}_Z[\beta(X_s,Z_s)]\|^2$ by $\frac{\tilde K^2}{2}(1+\|X_s\|^2)$.

Finally, we obtain

$$\mathbb{E}\big[1+\|X_t\|^2\big] \le \mathbb{E}\big[1+\|X_0\|^2\big] + 2\Big(K + \frac{\tilde K^2}{2} + \tilde K\Big)\int_0^t \mathbb{E}\big[1+\|X_s\|^2\big]\,ds.$$

Using Gronwall’s inequality, we deduce the $L^2$ boundedness of the solutions of the mean-field equations. □
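For concreteness, applying Gronwall’s inequality to $u(t)=\mathbb{E}[1+\|X_t\|^2]$ in the last display yields the explicit (non-optimal) bound

$$\mathbb{E}\big[1+\|X_t\|^2\big] \le \mathbb{E}\big[1+\|X_0\|^2\big]\,\exp\Big(2\Big(K+\frac{\tilde K^2}{2}+\tilde K\Big)t\Big),\qquad t\in[0,T],$$

so that Lemma 3 holds, for instance, with $C(T) = \mathbb{E}[1+\|X_0\|^2]\,e^{2(K+\tilde K^2/2+\tilde K)T}$.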

This lemma puts us in a position to prove the existence and uniqueness theorem:

Proof We start by showing the existence of solutions and then prove the uniqueness property. We recall that by the application of Lemma 3, the solutions will all have bounded second-order moment.

Existence. Let $X^0 = \big(X_t^0 = \{X_t^{0\,\alpha},\ \alpha=1\ldots P\}\big)_t \in \mathcal{M}(\mathcal{C})$ be a given stochastic process, and define the sequence of probability distributions $(X^k)_{k\ge 0}$ on $\mathcal{M}(\mathcal{C})$ by induction as $X^{k+1}=\Phi(X^k)$. Define also a sequence of processes $Z^k$, $k\ge 0$, independent of the processes $X^k$ and having the same law; we denote this as ‘$X$ and $Z$ i.i.d.’ below. We stop the processes at the time $\tau_U^k$, the first hitting time of the norm of $X^k$ of the constant value $U$. For convenience, we will make an abuse of notation in the proof and denote $X_t^k = X_{t\wedge\tau_U^k}^k$. This implies that $X_t^k$ belongs to $B_U^d$, the ball of radius $U$ centered at the origin in $\mathbb{R}^d$, for all times $t\in[0,T]$.

Using the notations introduced for Equation 25, we decompose the difference $X_t^{k+1}-X_t^k$ as follows:

$$X_t^{k+1}-X_t^k = \underbrace{\int_0^t \big(f(s,X_s^k)-f(s,X_s^{k-1})\big)\,ds}_{A_t} + \underbrace{\int_0^t \mathbb{E}_Z\big[b(X_s^k,Z_s^k)-b(X_s^{k-1},Z_s^{k-1})\big]\,ds}_{B_t} + \underbrace{\int_0^t \big(g(s,X_s^k)-g(s,X_s^{k-1})\big)\,dW_s}_{C_t} + \underbrace{\int_0^t \mathbb{E}_Z\big[\beta(X_s^k,Z_s^k)-\beta(X_s^{k-1},Z_s^{k-1})\big]\cdot dB_s}_{D_t}$$

and find an upperbound for $M_t^k := \mathbb{E}\big[\sup_{s\le t}\|X_s^{k+1}-X_s^k\|^2\big]$ by finding upperbounds for the corresponding norms of the four terms $A_t$, $B_t$, $C_t$ and $D_t$. Applying the discrete Cauchy-Schwarz inequality, we have:

$$\|X_t^{k+1}-X_t^k\|^2 \le 4\big(\|A_t\|^2 + \|B_t\|^2 + \|C_t\|^2 + \|D_t\|^2\big)$$

and treat each term separately. The upperbounds for the first two terms are obtained using the Cauchy-Schwarz inequality, those of the last two terms using the Burkholder-Davis-Gundy martingale moment inequality.

The term $A_t$ is easily controlled using the Cauchy-Schwarz inequality and assumption (H1):

$$\|A_s\|^2 \le K_U^2\,T\int_0^s \|X_u^k - X_u^{k-1}\|^2\,du.$$

Taking the sup of both sides of the last inequality, we obtain

$$\sup_{s\le t}\|A_s\|^2 \le K_U^2\,T\int_0^t \|X_s^k - X_s^{k-1}\|^2\,ds \le K_U^2\,T\int_0^t \sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\,ds,$$

from which follows the fact that

$$\mathbb{E}\Big[\sup_{s\le t}\|A_s\|^2\Big] \le K_U^2\,T\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\,ds.$$

The term $B_t$ is controlled using the Cauchy-Schwarz inequality, assumption (H2), and the fact that the processes $X$ and $Z$ are independent with the same law:

$$\|B_s\|^2 \le 2T L_U^2\int_0^s \Big(\|X_u^k - X_u^{k-1}\|^2 + \mathbb{E}\big[\|X_u^k - X_u^{k-1}\|^2\big]\Big)\,du.$$

Taking the sup of both sides of the last inequality, we obtain

$$\sup_{s\le t}\|B_s\|^2 \le 2T L_U^2\int_0^t \Big(\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2 + \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\Big)\,ds,$$

from which follows the fact that

$$\mathbb{E}\Big[\sup_{s\le t}\|B_s\|^2\Big] \le 4T L_U^2\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\,ds.$$

The term $C_t$ is controlled using the fact that it is a martingale and applying the Burkholder-Davis-Gundy martingale moment inequality and assumption (H1):

$$\mathbb{E}\Big[\sup_{s\le t}\|C_s\|^2\Big] \le 4K_U^2\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\,ds.$$

The term $D_t$ is also controlled using the fact that it is a martingale and applying the Burkholder-Davis-Gundy martingale moment inequality and assumption (H2):

$$\mathbb{E}\Big[\sup_{s\le t}\|D_s\|^2\Big] \le 16L_U^2\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\,ds.$$

Putting all of these together, we get:

$$\mathbb{E}\Big[\sup_{s\le t}\|X_s^{k+1}-X_s^k\|^2\Big] \le 4(T+4)\big(K_U^2 + 4L_U^2\big)\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^k - X_u^{k-1}\|^2\Big]\,ds.$$
(26)

From the relation $M_t^k \le K'\int_0^t M_s^{k-1}\,ds$ with $K' = 4(T+4)(K_U^2 + 4L_U^2)$, we get by an immediate recursion:

$$M_t^k \le (K')^k\int_0^t\!\int_0^{s_1}\!\cdots\!\int_0^{s_{k-1}} M_{s_k}^0\,ds_1\cdots ds_k \le \frac{(K')^k\,t^k}{k!}\,M_T^0$$
(27)

and $M_T^0$ is finite because the processes are bounded. The Bienaymé-Tchebychev inequality and Equation 27 now give

$$P\Big(\sup_{s\le t}\|X_s^{k+1}-X_s^k\|^2 > \frac{1}{2^{2(k+1)}}\Big) \le 4\,\frac{(4K' t)^k}{k!}\,M_T^0,$$

and this upper bound is the term of a convergent series. The Borel-Cantelli lemma then implies that for almost any $\omega\in\Omega$, there exists a positive integer $k_0(\omega)$ ($\omega$ denotes an element of the probability space $\Omega$) such that

$$\sup_{s\le t}\|X_s^{k+1}-X_s^k\|^2 \le \frac{1}{2^{2(k+1)}},\quad \forall k\ge k_0(\omega),$$

and hence

$$\sup_{s\le t}\|X_s^{k+1}-X_s^k\| \le \frac{1}{2^{k+1}},\quad \forall k\ge k_0(\omega).$$

It follows that with probability 1, the partial sums:

$$X_t^0 + \sum_{k=0}^{n-1}\big(X_t^{k+1}-X_t^k\big) = X_t^n$$

are uniformly (in $t\in[0,T]$) convergent. Denote the thus defined limit by $\bar X_t$. It is clearly continuous and $\mathcal{F}_t$-adapted. On the other hand, the inequality (Equation 27) shows that for every fixed $t$, the sequence $\{X_t^n\}_{n\ge 1}$ is a Cauchy sequence in $L^2$. Lemma 3 shows that $\bar X\in\mathcal{M}_2(\mathcal{C})$.

It is easy to show using routine methods that X ¯ indeed satisfies Equation 22.

To complete the proof, we use a standard truncation argument. This method replaces the function $f$ by the truncated function:

$$f_U(t,x) = \begin{cases} f(t,x), & \|x\|\le U\\ f\big(t,\,U x/\|x\|\big), & \|x\|>U \end{cases}$$

and similarly for $g$. The functions $f_U$ and $g_U$ are globally Lipschitz continuous; hence, the previous proof shows that there exists a unique solution $\bar X_U$ to the mean-field equation (Equation 22) associated with the truncated functions. This solution satisfies the equation

$$\bar X_U(t) = X_0 + \int_0^t \Big(f_U\big(s,\bar X_U(s)\big) + \mathbb{E}_{\bar Z}\big[b\big(\bar X_U(s),\bar Z_s\big)\big]\Big)\,ds + \int_0^t g_U\big(s,\bar X_U(s)\big)\,dW_s + \int_0^t \mathbb{E}_{\bar Z}\big[\beta\big(\bar X_U(s),\bar Z_s\big)\big]\cdot dB_s,\quad t\in[0,T].$$
(28)

Let us now define the stopping time as

$$\tau_U = \inf\big\{t\in[0,T],\ \|\bar X_U(t)\|\ge U\big\}.$$

It is easy to show that

$$\bar X_{U'}(t) = \bar X_U(t)\quad \text{if } 0\le t\le\tau_U \text{ and } U'\ge U,$$
(29)

implying that the sequence of stopping times $\tau_U$ is increasing. Using Lemma 3, which implies that the solution to Equation 22 is almost surely bounded, for almost all $\omega\in\Omega$, there exists $U_0(\omega)$ such that $\tau_U = T$ for all $U\ge U_0$. Now, define $\bar X(t) = \bar X_{U_0}(t)$, $t\in[0,T]$. Because of Equation 29, we have $\bar X(t\wedge\tau_U) = \bar X_U(t\wedge\tau_U)$, and it follows from Equation 28 that

$$\begin{aligned} \bar X(t\wedge\tau_U) ={}& X_0 + \int_0^{t\wedge\tau_U}\Big(f_U(s,\bar X_s) + \mathbb{E}_{\bar Z}\big[b(\bar X_s,\bar Z_s)\big]\Big)\,ds + \int_0^{t\wedge\tau_U} g_U(s,\bar X_s)\,dW_s + \int_0^{t\wedge\tau_U}\mathbb{E}_{\bar Z}\big[\beta(\bar X_s,\bar Z_s)\big]\cdot dB_s\\ ={}& X_0 + \int_0^{t\wedge\tau_U}\Big(f(s,\bar X_s) + \mathbb{E}_{\bar Z}\big[b(\bar X_s,\bar Z_s)\big]\Big)\,ds + \int_0^{t\wedge\tau_U} g(s,\bar X_s)\,dW_s + \int_0^{t\wedge\tau_U}\mathbb{E}_{\bar Z}\big[\beta(\bar X_s,\bar Z_s)\big]\cdot dB_s, \end{aligned}$$

and letting $U\to\infty$, we have shown the existence of a solution to Equation 22 which, by Lemma 3, is square-integrable.

Uniqueness. Assume that $X$ and $Y$ are two solutions of the mean-field equations (Equation 22). From Lemma 3, we know that both solutions are in $\mathcal{M}_2(\mathcal{C})$. Moreover, using the bound of Equation 26, we directly obtain the inequality:

$$\mathbb{E}\Big[\sup_{s\le t}\|X_s - Y_s\|^2\Big] \le K'\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u - Y_u\|^2\Big]\,ds,$$

which, by Gronwall’s inequality, directly implies that

$$\mathbb{E}\Big[\sup_{s\le t}\|X_s - Y_s\|^2\Big] = 0,$$

which ends the proof. □

We have proved the well-posedness of the mean-field equations. It remains to show that the solutions to the network equations converge to the solutions of the mean-field equations. This is what is achieved in the next theorem.

Theorem 4 Under assumptions (H1) to (H4), the following holds true:

  • Convergence (Footnote 3): For each neuron i of population α, the law of the multidimensional process $X^{i,N}$ converges towards the law of the solution of the mean-field equation related to population α, namely $\bar X^\alpha$.

  • Propagation of chaos: For any $k\in\mathbb{N}$ and any k-tuple $(i_1,\ldots,i_k)$, the law of the process $(X_t^{i_1,N},\ldots,X_t^{i_k,N},\ t\le T)$ converges towards $m_t^{p(i_1)}\otimes\cdots\otimes m_t^{p(i_k)}$ (Footnote 4), i.e. the asymptotic processes have the law of the solution of the mean-field equations and are all independent.

This theorem has important implications in neuroscience that we discuss in the ‘Discussion and conclusion’ section. Its proof is given in the Appendix.

4 Numerical simulations

At this point, we have provided a compact description of the activity of the network when the number of neurons tends to infinity. However, the structure of the solutions of these equations is difficult to understand from the implicit mean-field equations (Equation 22) and their variants (such as the McKean-Vlasov-Fokker-Planck equations (Equation 24)). In this section, we present some classical ways to numerically approximate the solutions to these equations and give some indications about the rate of convergence and the accuracy of the simulation. These numerical schemes allow us to compute and visualize the solutions. We then compare the results of the two schemes for a network of FitzHugh-Nagumo neurons belonging to a single population and show their good agreement.

The main difficulty one faces when developing numerical schemes for Equations 22 and 24 is that they are non-local. By this, we mean that in the case of the McKean-Vlasov equations, they contain the expectation of a certain function under the law of the solution to the equations (see Equation 22). In the case of the corresponding Fokker-Planck equation, they contain integrals of the probability density functions which are solutions to the equation (see Equation 24).

4.1 Numerical simulations of the McKean-Vlasov equations

The fact that the McKean-Vlasov equations involve an expectation of a certain function under the law of the solution of the equation makes them particularly hard to simulate directly. One is often reduced to using Monte Carlo simulations to compute this expectation, which amounts to simulating the solution of the network equations themselves (see [68]). This is the method we used. In its simplest fashion, it consists of a Monte Carlo simulation where one numerically solves the N network equations (Equation 21) with the classical Euler-Maruyama method a number of times with different initial conditions and averages the trajectories of the solutions over the number of simulations.

In detail, let $\Delta t>0$ and $N\in\mathbb{N}$. The discrete-time dynamics implemented in the stochastic numerical simulations consists of simulating $M$ times a P-population discrete-time process $(\tilde X_n^{i},\ n\le T/\Delta t,\ i=1\ldots N)$, solution of the recursion, for $i$ in population $\alpha$:

$$\begin{aligned} \tilde X_{n+1}^{i,r} = \tilde X_n^{i,r} &+ \Delta t\,\Big\{ f_\alpha\big(t_n,\tilde X_n^{i,r}\big) + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1,\,p(j)=\gamma}^{N_\gamma} b_{\alpha\gamma}\big(\tilde X_n^{i,r},\tilde X_n^{j,r}\big)\Big\}\\ &+ \sqrt{\Delta t}\,\Big\{ g_\alpha\big(t_n,\tilde X_n^{i,r}\big)\,\xi_{n+1}^{i,r} + \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1,\,p(j)=\gamma}^{N_\gamma} \beta_{\alpha\gamma}\big(\tilde X_n^{i,r},\tilde X_n^{j,r}\big)\,\zeta_{n+1}^{i\gamma,r}\Big\}, \end{aligned}$$
(30)

where $\xi_n^{i,r}$ and $\zeta_n^{i\gamma,r}$ are independent $d$- and $\delta$-dimensional standard normal random variables. The initial conditions $\tilde X_1^{i,r}$, $i=1,\ldots,N$, are drawn independently from the same law within each population for each Monte Carlo simulation $r=1,\ldots,M$. One then chooses one neuron $i_\alpha$ in each population $\alpha=1,\ldots,P$. If the size $N$ of the population is large enough, Theorem 4 states that the law, noted $p_\alpha(t,X)$, of $X^{i_\alpha}$ should be close to that of the solution $\bar X^\alpha$ of the mean-field equations for $\alpha=1,\ldots,P$. Hence, in effect, simulating the network is a good approximation (see below) of the simulation of the mean-field or McKean-Vlasov equations [68, 69]. An approximation of $p_\alpha(t,X)$ can be obtained from the Monte Carlo simulations by quantizing the phase space and incrementing the count of each bin whenever the trajectory of the $i_\alpha$ neuron at time $t$ falls into that particular bin. The resulting histogram can then be compared to the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) corresponding to population $\alpha$, whose numerical solution is described next.
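For illustration, the following minimal Python sketch implements the recursion of Equation 30 for a single population (P = 1) with $\delta = 1$. The drift, coupling and noise functions used in the toy call at the end are illustrative stand-ins (a cubic, FitzHugh-Nagumo-like drift and a linear coupling), not the exact network model of Equation 14:

import numpy as np

def simulate_network(f, b, g, beta, x0, T, dt, rng):
    # x0: array of shape (N, d) with the initial state of every neuron.
    # Returns the trajectories, shape (n_steps + 1, N, d).
    N, d = x0.shape
    n_steps = int(round(T / dt))
    traj = np.empty((n_steps + 1, N, d))
    traj[0] = x0
    x = x0.copy()
    for n in range(n_steps):
        t = n * dt
        # empirical coupling terms: (1/N) * sum_j b(x_i, x_j) and (1/N) * sum_j beta(x_i, x_j)
        pair_b = b(x[:, None, :], x[None, :, :]).mean(axis=1)         # (N, d)
        pair_beta = beta(x[:, None, :], x[None, :, :]).mean(axis=1)   # (N, d)
        xi = rng.standard_normal((N, d))    # intrinsic noise, one d-dimensional draw per neuron
        zeta = rng.standard_normal((N, 1))  # synaptic noise, one draw per neuron (delta = 1)
        x = x + dt * (f(t, x) + pair_b) + np.sqrt(dt) * (g(t, x) * xi + pair_beta * zeta)
        traj[n + 1] = x
    return traj

# Toy usage (illustrative drift and coupling, not the model of Equation 14):
rng = np.random.default_rng(0)
f = lambda t, x: x - x**3 / 3.0 + 0.4   # cubic, FitzHugh-Nagumo-like drift
b = lambda xi, xj: 0.5 * (xj - xi)      # linear (electrical-synapse-like) coupling
g = lambda t, x: 0.3                    # constant intrinsic noise amplitude
beta = lambda xi, xj: 0.0 * xi          # no synaptic noise in this toy example
traj = simulate_network(f, b, g, beta, rng.standard_normal((100, 1)), T=10.0, dt=0.01, rng=rng)

Repeating such a run M times with independent initial conditions and noise paths, and histogramming the state of one chosen neuron, gives the Monte Carlo approximation of $p_\alpha(t,X)$ described above.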

The mean square error between the solution of the numerical recursion (Equation 30), $\tilde X_n^i$, and the solution of the mean-field equations (Equation 22), $\bar X_{n\Delta t}^i$, is of order $O(\sqrt{\Delta t} + 1/\sqrt{N})$, the first term being related to the error made by approximating the solution of the network of size $N$, $X_{n\Delta t}^{i,N}$, by an Euler-Maruyama method, and the second term to the convergence of $X_{n\Delta t}^{i,N}$ towards the mean-field solution $\bar X_{n\Delta t}^i$ when considering globally Lipschitz continuous dynamics (see proof of Theorem 4 in the Appendix). In our case, as shown before, the dynamics is only locally Lipschitz continuous. Finding efficient and provably convergent numerical schemes to approximate the solutions of such stochastic differential equations is an area of active research. There exist proofs that some schemes are divergent [70] or convergent [71] for some types of drift and diffusion coefficients. Since our equations are not covered by either case, we conjecture convergence, as we did not observe any divergence in practice, and leave the proof for future work.

4.2 Numerical simulations of the McKean-Vlasov-Fokker-Planck equation

For solving the McKean-Vlasov-Fokker-Planck equation (Equation 24), we have used the method of lines [72, 73]. Its basic idea is to discretize the phase space and to keep the time continuous. In this way, the values $p_\alpha(t,X)$, $\alpha=1,\ldots,P$, of the probability density function of population $\alpha$ at each sample point $X$ of the phase space are the solutions of ODEs in which the independent variable is the time. Each sample point in the phase space generates P ODEs, resulting in a large system of coupled ODEs. The solutions to this system yield the values of the probability density functions $p_\alpha$ solution of Equation 24 at the sample points. The computation of the integral terms that appear in the McKean-Vlasov-Fokker-Planck equation is achieved through a composite scheme, the Newton-Cotes method of order 6 [74]. The dimensionality of the space being large and numerical errors increasing with the dimensionality of the integrand, such precise integration schemes are necessary. For an arbitrary real function $f$ to be integrated between the values $x_1$ and $x_2$, this numerical scheme reads:

$$\int_{x_1}^{x_2} f(x)\,dx \approx \frac{5}{288}\,\Delta x \sum_{i=1}^{M/5}\Big[19 f\big(x_1+(5i-5)\Delta x\big) + 75 f\big(x_1+(5i-4)\Delta x\big) + 50 f\big(x_1+(5i-3)\Delta x\big) + 50 f\big(x_1+(5i-2)\Delta x\big) + 75 f\big(x_1+(5i-1)\Delta x\big) + 19 f\big(x_1+5i\,\Delta x\big)\Big],$$

where $\Delta x$ is the integration step, and $M=(x_2-x_1)/\Delta x$ is chosen to be an integer multiple of 5.
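A direct transcription of this composite rule, assuming only that the number of subintervals is a multiple of 5, could read as follows (the sine example at the end is purely illustrative):

import numpy as np

def newton_cotes_6(f_values, dx):
    # f_values: samples f(x_1), f(x_1 + dx), ..., f(x_2) with (len - 1) a multiple of 5
    M = len(f_values) - 1
    assert M % 5 == 0, "the number of subintervals must be a multiple of 5"
    w = np.array([19.0, 75.0, 50.0, 50.0, 75.0, 19.0])
    total = 0.0
    for i in range(0, M, 5):
        total += np.dot(w, f_values[i:i + 6])   # one 6-point panel of the composite rule
    return 5.0 * dx / 288.0 * total

# Example: integrate sin on [0, pi]; the exact value is 2.
x = np.linspace(0.0, np.pi, 101)   # 100 subintervals, a multiple of 5
print(newton_cotes_6(np.sin(x), x[1] - x[0]))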

The discretization of the derivatives with respect to the phase space parameters is done through the following fourth-order central difference scheme:

$$\frac{df(x)}{dx} \approx \frac{f(x-2\Delta x) - 8 f(x-\Delta x) + 8 f(x+\Delta x) - f(x+2\Delta x)}{12\,\Delta x},$$

for the first-order derivatives, and

$$\frac{d^2 f(x)}{dx^2} \approx \frac{-f(x-2\Delta x) + 16 f(x-\Delta x) - 30 f(x) + 16 f(x+\Delta x) - f(x+2\Delta x)}{12\,\Delta x^2}$$

for the second-order derivatives (see [75]).
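These stencils are straightforward to vectorize. The sketch below applies them along a one-dimensional grid with periodic wrapping (np.roll) purely for brevity; the actual computation uses the Dirichlet boundary conditions described in the next section:

import numpy as np

def d1_central4(f, dx):
    # fourth-order central difference for the first derivative
    return (np.roll(f, 2) - 8 * np.roll(f, 1) + 8 * np.roll(f, -1) - np.roll(f, -2)) / (12 * dx)

def d2_central4(f, dx):
    # fourth-order central difference for the second derivative
    return (-np.roll(f, 2) + 16 * np.roll(f, 1) - 30 * f + 16 * np.roll(f, -1) - np.roll(f, -2)) / (12 * dx**2)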

Finally, we have used a Runge-Kutta method of order 2 (RK2) for the numerical integration of the resulting system of ODEs. This is an explicit two-stage method for ordinary differential equations, characterized by its Butcher tableau.
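As a sketch, one explicit two-stage step for the method-of-lines system $dp/dt = \mathrm{rhs}(t,p)$ can be written as below; the midpoint variant is used here as one common RK2 choice, since the specific tableau is not reproduced in this text and is therefore an assumption:

def rk2_step(rhs, t, p, dt):
    # explicit midpoint rule: one of several possible second-order two-stage tableaus
    k1 = rhs(t, p)
    k2 = rhs(t + 0.5 * dt, p + 0.5 * dt * k1)
    return p + dt * k2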

4.3 Comparison between the solutions to the network and the mean-field equations

We illustrate these ideas with the example of a network of 100 FitzHugh-Nagumo neurons belonging to one, excitatory, population. We also use chemical synapses with the variation of the weights described by Equation 11. We choose a finite volume, outside of which we assume that the probability density function (p.d.f.) is zero. We then discretize this volume with $n_V\,n_w\,n_y$ points defined by

$$n_V \stackrel{\mathrm{def}}{=} \frac{V_{\max}-V_{\min}}{\Delta V},\qquad n_w \stackrel{\mathrm{def}}{=} \frac{w_{\max}-w_{\min}}{\Delta w},\qquad n_y \stackrel{\mathrm{def}}{=} \frac{y_{\max}-y_{\min}}{\Delta y},$$

where $V_{\min}$, $V_{\max}$, $w_{\min}$, $w_{\max}$, $y_{\min}$ and $y_{\max}$ define the volume in which we solve the network equations and estimate the histogram defined in the ‘Numerical simulations of the McKean-Vlasov equations’ section, while $\Delta V$, $\Delta w$ and $\Delta y$ are the quantization steps in each dimension of the phase space. For the simulation of the McKean-Vlasov-Fokker-Planck equation, instead, we use Dirichlet boundary conditions and assume the probability and its partial derivatives to be 0 on the boundary and outside the volume.

In general, the total number of coupled ODEs that we have to solve for the McKean-Vlasov-Fokker-Planck equation with the method of lines is the product $P\,n_V\,n_w\,n_y$ (in our case, we chose P=1). This can become fairly large if we increase the precision of the phase space discretization. Moreover, increasing the precision of the simulation in the phase space, in order to ensure the numerical stability of the method of lines, requires decreasing the time step $\Delta t$ used in the RK2 scheme. This can strongly impact the efficiency of the numerical method (see the ‘Numerical simulations with GPUs’ section).

In the simulations shown in the left-hand parts of Figures 4 and 5, we have used one population of 100 excitatory FitzHugh-Nagumo neurons connected with chemical synapses. We performed 10,000 Monte Carlo simulations of the network equations (Equation 14) with the Euler-Maruyama method in order to approximate the probability density. The model for the time variation of the synaptic weights is the simple model. The p.d.f. p(0,V,w,y) of the initial condition is Gaussian and reads

$$p(0,V,w,y) = \frac{1}{(2\pi)^{3/2}\,\sigma_{V_0}\sigma_{w_0}\sigma_{y_0}}\;e^{-\frac{(V-\bar V_0)^2}{2\sigma_{V_0}^2}-\frac{(w-\bar w_0)^2}{2\sigma_{w_0}^2}-\frac{(y-\bar y_0)^2}{2\sigma_{y_0}^2}}.$$
(31)
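The following sketch builds a small phase-space grid and evaluates the Gaussian initial condition of Equation 31 on it; the grid bounds and the standard deviations are placeholders rather than the values of Table 1 (only the means $\bar V_0=0$, $\bar w_0=0.5$, $\bar y_0=0.3$ are taken from the text):

import numpy as np

V = np.linspace(-3.0, 3.0, 60)   # V_min, V_max and n_V are illustrative
w = np.linspace(-2.0, 2.0, 40)
y = np.linspace(0.0, 1.0, 20)
VV, WW, YY = np.meshgrid(V, w, y, indexing="ij")

def gaussian_ic(mV, mw, my, sV, sw, sy):
    norm = (2 * np.pi) ** 1.5 * sV * sw * sy
    return np.exp(-(VV - mV)**2 / (2 * sV**2)
                  - (WW - mw)**2 / (2 * sw**2)
                  - (YY - my)**2 / (2 * sy**2)) / norm

p0 = gaussian_ic(mV=0.0, mw=0.5, my=0.3, sV=0.2, sw=0.2, sy=0.1)
# sanity check: the density should integrate to roughly 1 on the grid
print(p0.sum() * (V[1] - V[0]) * (w[1] - w[0]) * (y[1] - y[0]))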
Figure 4 Joint probability distribution of (V,w) computed with the Monte Carlo algorithm for the network equations (Equation 14) (left) compared with the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) (right), sampled at four times $t_{\text{fin}}$. Parameters are given in Table 1, with a current I=0.4 corresponding to a stable limit cycle. Initial conditions (first column of Table 1) are concentrated inside this limit cycle. The two distributions are similar and centered around the limit cycle with two peaks (see text).

Figure 5 Joint probability distribution of (V,y) computed with the Monte Carlo algorithm for the network equations (Equation 14) (left) compared with the solution of the McKean-Vlasov-Fokker-Planck equation (Equation 24) (right), sampled at four times $t_{\text{fin}}$. Parameters are given in Table 1, with a current I=0.4 corresponding to a stable limit cycle. Initial conditions (first column of Table 1) are concentrated inside this limit cycle. The two distributions are similar and centered around the limit cycle with two peaks (see text).

The parameters are given in the first column of Table 1. In this table, the parameter t fin is the time at which we stop the computation of the trajectories in the case of the network equations and the computation of the solution of the McKean-Vlasov-Fokker-Planck equation in the case of the mean-field equations. The sequence [0.5,1.2,1.5,2.2] indicates that we compute the solutions at those four time instants corresponding to the four rows of Figures 4 and 5. The phase space has been quantized with the parameters shown in the second column of the same table to solve the McKean-Vlasov-Fokker-Planck equation. This quantization has also been used to build the histograms that represent the marginal probability densities with respect to the pairs (V,w) and (V,y) of coordinates of the state vector of a particular neuron. These histograms have then been interpolated to build the surfaces shown in the left-hand side of Figures 4 and 5. The parameters of the FitzHugh-Nagumo model are the same for each neuron of the population: they are shown in the third column of Table 1.

Table 1 Parameters used in the simulations of the neural network and for solving the McKean-Vlasov-Fokker-Planck equation

The parameters for the noisy model of maximum conductances of Equation 11 are shown in the fourth column of the table. For these values of J ¯ and σ J , the probability that the maximum conductances change sign is very small. Finally, the parameters of the chemical synapses are shown in the sixth column. The parameters Γ and Λ are those of the χ function (Equation 3). The solutions are computed over an interval of t fin =0.5,1.2,1.5,2.2 time units with a time sampling of Δt=0.1 for the network and Δt=0.01 for the McKean-Vlasov-Fokker-Planck equation. The rest of the parameters are the typical values for the FitzHugh-Nagumo equations.

The marginals estimated from the trajectories of the network solutions are then compared to those obtained from the numerical solution of the McKean-Vlasov-Fokker-Planck equation (see Figures 4 and 5 right), using the method of lines explained above and starting from the same initial conditions (Equation 31) as the neural network.

We have used the value I=0.4 for the external current (this value corresponds to the existence of a stable limit cycle for the isolated FitzHugh-Nagumo neuron), and the initial conditions have the values V ¯ 0 =0, w ¯ 0 =0.5 and y ¯ 0 =0.3; therefore, the initial points of the trajectories in the phase space are concentrated inside the limit cycle. We therefore expect that the solutions of the neural network and the McKean-Vlasov-Fokker-Planck equation will concentrate their mass around the limit cycle. This is what is observed in Figures 4 and 5, where the simulation of the neural network (left-hand side) is in very good agreement with the results of the simulation of the McKean-Vlasov-Fokker-Planck equation (right-hand side). Note that the densities display two peaks. These two peaks correspond to the fact that depending upon the position of the initial condition with respect to the nullclines of the FitzHugh-Nagumo equations, the points in the phase space follow two different classes of trajectories, as shown in Figure 6. The two peaks then rotate along the limit cycle in the (V,w) space (see also the ‘Numerical simulations with GPUs’ section).

Figure 6 Projection of 100 trajectories in the (V,w) (top left), (V,y) (top right) and (w,y) (bottom) planes. The limit cycle is especially visible in the (V,w) projection (red curves). The initial conditions split the trajectories into two classes corresponding to the two peaks shown in Figures 4 and 5. The parameters are the same as those used to generate these two figures.

Figures 4 and 5 show a qualitative similarity between the marginal probability density functions obtained by simulating the network and those obtained by solving the Fokker-Planck equation corresponding to the mean-field equations. To make this comparison more quantitative, we computed the Kullback-Leibler divergence $D_{\mathrm{KL}}(p_{\mathrm{Network}}\,\|\,p_{\mathrm{MVFP}})$ between the two distributions.

We performed 10,000 Monte Carlo simulations of the network equations up to $t_{\text{fin}}=10$ for increasing values of the network size N. As shown in Figure 7, the Kullback-Leibler divergence does decrease with increasing values of N, thereby confirming the fact that even for relatively small values of N, the average behavior of the network is well represented by the mean-field system described by the McKean-Vlasov-Fokker-Planck equation.
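For reference, the discrete Kullback-Leibler divergence between the network histogram and the Fokker-Planck marginal, both defined on the same grid, can be estimated as in the following sketch (function and variable names are illustrative):

import numpy as np

def kl_divergence(p_network, p_mvfp, eps=1e-12):
    # normalize both discretized densities before comparing them
    p = p_network / p_network.sum()
    q = p_mvfp / p_mvfp.sum()
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps)))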

Figure 7 Variation of the Kullback-Leibler divergence between the marginal probability density function p(t,V,w) estimated from the network equations and the one computed from the McKean-Vlasov-Fokker-Planck equation, as a function of the network size. We have performed 10,000 Monte Carlo simulations of the network equations up to time $t_{\text{fin}}=10.0$.

4.4 Numerical simulations with GPUs

Unfortunately, the algorithm for solving the McKean-Vlasov-Fokker-Planck equation described in the previous section is computationally very expensive. In fact, when the number of points in the discretized grid of the (V,w,y) phase space is large, i.e. when the discretization steps ΔV, Δw and Δy are small, we also need to keep Δt small enough in order to guarantee the stability of the algorithm. This implies that the number of equations that must be solved is large and, moreover, that they must be solved with a small time step if we want to keep the numerical errors small. This inevitably slows down the simulations. We have dealt with this problem by using more powerful hardware, namely graphical processing units (GPUs).

We have changed the Runge-Kutta scheme of order 2 used for the simulations shown in the ‘Numerical simulations of the McKean-Vlasov-Fokker-Planck equation’ section and adopted a more accurate Runge-Kutta scheme of order 4. This was done because with the more powerful machine, each computation of the right-hand side of the equation is faster, making it possible to use four calls per time step instead of two in the previous method. Hence, the parallel hardware allowed us to use a more accurate method.

One of the purposes of the numerical study is to get a feeling for how the different parameters, in particular those related to the sources of noise, influence the solutions of the McKean-Vlasov-Fokker-Planck equation. This is meant to prepare the ground for the study of the bifurcations of these solutions with respect to these parameters, as was done in [76] in a different context. For this preliminary study, we varied the input current I and the parameter $\sigma_{\text{ext}}$ controlling the intensity of the noise on the membrane potential in Equations 14. The McKean-Vlasov-Fokker-Planck equation writes in this case (see Footnote 5):

$$\begin{aligned} \frac{\partial}{\partial t}p(t,V,w,y) ={}& -\frac{\partial}{\partial V}\Big\{\Big[V-\frac{V^3}{3}-w+I-\bar J\,(V-V_{\mathrm{rev}})\int_{\mathbb{R}^3} y\,p(t,V,w,y)\,dV\,dw\,dy\Big]\,p(t,V,w,y)\Big\}\\ &-\frac{\partial}{\partial w}\big[c\,(V+a-b\,w)\,p(t,V,w,y)\big] - \frac{\partial}{\partial y}\Big\{\big[a_r S(V)(1-y)-a_d\,y\big]\,p(t,V,w,y)\Big\}\\ &+\frac{1}{2}\frac{\partial^2}{\partial V^2}\Big\{\Big[\sigma_{\mathrm{ext}}^2+\sigma_J^2\,(V-V_{\mathrm{rev}})^2\Big(\int_{\mathbb{R}^3} y\,p(t,V,w,y)\,dV\,dw\,dy\Big)^{2}\Big]\,p(t,V,w,y)\Big\}\\ &+\frac{1}{2}\,\sigma_w^2\,\frac{\partial^2}{\partial w^2}p(t,V,w,y) + \frac{1}{2}\frac{\partial^2}{\partial y^2}\Big\{\big[a_r S(V)(1-y)+a_d\,y\big]\,\chi^2(y)\,p(t,V,w,y)\Big\}. \end{aligned}$$
(32)
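To illustrate how the non-local term enters the method of lines, the partial sketch below assembles only the V-advection and V-diffusion contributions of Equation 32 on the grid, reusing the grids (VV, WW, YY) and the finite-difference helpers d1_central4 and d2_central4 from the earlier sketches; the w and y terms follow the same pattern, and all parameter values other than I=0.4 are placeholders, not those of Table 2:

import numpy as np

def rhs_V_terms(t, p, dV, dw, dy, I=0.4, J_bar=1.0, V_rev=1.0, sigma_ext=0.27, sigma_J=0.2):
    # non-local coupling: the integral of y * p over the whole phase space
    y_mean = np.sum(YY * p) * dV * dw * dy
    drift_V = VV - VV**3 / 3.0 - WW + I - J_bar * (VV - V_rev) * y_mean
    diff_V = sigma_ext**2 + sigma_J**2 * (VV - V_rev)**2 * y_mean**2
    # derivatives along the V axis (axis 0 of the (n_V, n_w, n_y) arrays)
    adv = -np.apply_along_axis(d1_central4, 0, drift_V * p, dV)
    dif = 0.5 * np.apply_along_axis(d2_central4, 0, diff_V * p, dV)
    return adv + dif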

The simulations were run with the χ function (Equation 3) and the initial condition described by Equation 31; the parameters are shown in Table 2. These parameters are similar to those used in the previous numerical simulations, but they differ in the size of the grid, which is larger in this case.

Table 2 Parameters used in the simulations of the McKean-Vlasov-Fokker-Planck equation on GPUs

Four snapshots of the solution are shown in Figure 8 (corresponding to the values I=0.4 and σ ext =0.27 of the external input current and of the standard deviation of the noise on the membrane potential), and three are shown in Figure 9 (corresponding to the values I=0.7 and σ ext =0.45). In the figures, the left column corresponds to the values of the marginal p(t,V,w), and the right column corresponds to the values of the marginal p(t,V,y). Both are necessary to get an idea of the shape of the full distribution p(t,V,w,y). The first row of Figure 8 shows the initial conditions. They are the same for the results shown in Figure 9. The second, third and fourth rows of Figure 8 show the time instants t=30.0, t=50.0 and at convergence (the time units differ from those of the previous section, but it is irrelevant to this discussion). The three rows of Figure 9 show the time instants t=30.0, t=50.0 and at convergence. In both cases, the solution appears to converge to a stationary distribution whose mass is distributed over a ‘blurred’ version of the limit cycle of the isolated neuron. The ‘blurriness’ increases with the variance of the noise. The four movies for these two cases are available as Additional files 1, 2, 3 and 4.

Figure 8 Marginals with respect to the V and w variables (left) and to the V and y variables (right) of the solution of the McKean-Vlasov-Fokker-Planck equation. The first row shows the initial condition; the second, the marginals at time 30.0; the third, the marginals at time 50.0; and the fourth, the stationary (large time) solutions. The input current I is equal to 0.4 and $\sigma_{\text{ext}}=0.27$. These are screenshots at different times of movies available as Additional files 1 and 2.

Figure 9 Marginals with respect to the V and w variables (left) and to the V and y variables (right) of the solution of the McKean-Vlasov-Fokker-Planck equation. The first row shows the marginals at time 30.0, the second the marginals at time 50.0 and the third the stationary (large time) solutions. The input current I is equal to 0.7 and $\sigma_{\text{ext}}=0.45$. These are screenshots at different times of movies available as Additional files 3 and 4.

The results shown in Figures 8 and 9 and in Additional files 1, 2, 3 and 4 were obtained using two machines, each with seven nVidia Tesla C2050 cards, six 2.66 GHz dual-Xeon X5650 processors and 72 GB of RAM. The communication inside each machine was done using the lpthreads library and between machines using MPI calls. The mean execution time per time step using the parameters already described is 0.05 s.

The reader interested in more details in the numerical implementations and in the gains that can be achieved by the use of GPUs can consult [77].

In Figure 10, we show a solution to the McKean-Vlasov-Fokker-Planck equation which is qualitatively quite different from the solutions shown in Figures 8 and 9: The stationary solution is concentrated at a point in (V,w,y) space. This is an indication that perhaps, between the values −0.8 and 0.4 of the input current, the solutions to the McKean-Vlasov-Fokker-Planck equation have bifurcated. The numerical tools we have developed may be a way to build an intuition to guide a rigorous analysis of these phenomena.

Figure 10 Marginals with respect to the V and w variables (left) and to the V and y variables (right) of the solution of the McKean-Vlasov-Fokker-Planck equation at convergence. The parameters are those of Table 1 except for the input current I, which is equal to −0.8, $\sigma_{\text{ext}}=0.45$ and $t_{\text{fin}}=2.2$. Compare with the last row of Figure 9 (see text).

5 Discussion and conclusion

In this article, we addressed the problem of the limit in law of networks of biologically inspired neurons as the number of neurons tends to infinity. We emphasized the necessity of dealing with biologically inspired models and discussed at length the type of models relevant to this study. We chose to address the case of conductance-based network models, which are a relevant description of neuronal activity. The mathematical analysis of these interacting diffusion processes resulted in the replacement of a set of NP d-dimensional coupled equations (the network equations), in the limit of large N, by P d-dimensional mean-field equations describing the global behavior of the network. However, the price to pay for this reduction was the fact that the resulting mean-field equations are nonstandard stochastic differential equations, similar to the McKean-Vlasov equations. These can be expressed either as implicit equations on the law of the solution or, in terms of the probability density function, through the McKean-Vlasov-Fokker-Planck equations, as a nonlinear, non-local partial differential equation. These equations are, in general, hard to study theoretically.

Besides the fact that we explicitly model real spiking neurons, the mathematical part of our work differs from that of previous authors such as McKean, Tanaka and Sznitman (see the ‘Introduction’ section) because we are considering several populations with the effect that the analysis is significantly more complicated. Our hypotheses are also more general, e.g. the drift and diffusion functions are nontrivial and satisfy the general condition (H4) which is more general than the usual linear growth condition. Also, they are only assumed locally (and not globally) Lipschitz continuous to be able to deal, for example, with the FitzHugh-Nagumo model. A locally Lipschitz continuous case was recently addressed in a different context for a model of swarming in [67].

Proofs of our results, for somewhat stronger hypotheses than ours and in special cases, are scattered in the literature, as briefly reviewed in the ‘Introduction’ and ‘Setting of the problem’ sections. Our main contribution is that we provide a complete, self-sufficient proof in a fairly general case by gathering all the ingredients that are required for our neuroscience applications. In particular, the case of the FitzHugh-Nagumo model where the drift function does not satisfy the linear growth condition involves a generalization of previous works using the more general growth condition (H4).

The simulation of these equations can itself be very costly. We hence addressed in the ‘Numerical simulations’ section numerical methods to compute the solutions of these equations, either in the probabilistic framework, using the convergence result of the network equations to the mean-field limit together with standard integration methods for differential equations, or in the Fokker-Planck framework. The simulations performed for different values of the external input current parameter and of one of the parameters controlling the noise allowed us to show that the spatio-temporal shape of the probability density function describing the solution of the McKean-Vlasov-Fokker-Planck equation is sensitive to the variations of these parameters, as shown in Figures 8 and 9. However, we did not address the full characterization of the dynamics of the solutions in the present article. This appears to be a complex question that will be the subject of future work. It is known that for different McKean-Vlasov equations, stationary solutions do not necessarily exist and, when they do, are not necessarily unique (see [78]). A very particular case of these equations was treated in [76], where the authors consider that the function $f_\alpha$ is linear, $g_\alpha$ is constant and $b_{\alpha\beta}(x,y)=S_\beta(y)$. This model, known as the firing-rate model, is shown in that paper to have Gaussian solutions when the initial data are Gaussian, and the dynamics of the solutions can be exactly reduced to a set of 2P coupled ordinary differential equations governing the mean and the standard deviation of the solution. Under these assumptions, a complete study of the solutions is possible, and the dependence upon the parameters can be understood through bifurcation analysis. The authors show that intrinsic noise levels govern the dynamics, creating or destroying fixed points and periodic orbits.

The mean-field description also has deep theoretical implications in neuroscience. Indeed, it points towards the fact that neurons encode their responses to stimuli through probability distributions. This type of coding was evoked by several authors [47], and the mean-field approach shows that under some mild conditions, this phenomenon arises: all neurons belonging to a particular population can be seen as independent realizations of the same process, governed by the mean-field equation. The relevance of this phenomenon is reinforced by the fact that it has recently been observed experimentally that neurons have correlation levels significantly below what had previously been reported [13]. This independence has deep implications on the efficiency of neural coding, which the propagation of chaos theory accounts for. To illustrate this phenomenon, we have performed the following simulations. Considering networks of 2, 10 and 100 FitzHugh-Nagumo neurons, we have simulated the network equations 2,000 times over the time interval [0,100]. We have picked at random a pair of neurons and computed the time variation of the cross-correlation of the values of their state variables. The results are shown in Figure 11. It appears that the propagation of chaos is observable for relatively small numbers of neurons in the network, thus indicating once more that the theory developed in this paper in the limit case of an infinite number of neurons is quite robust to finite-size effects (see Footnote 6).
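The following sketch shows one way to estimate such a pairwise cross-correlation from repeated Monte Carlo runs of the network (array shapes and names are illustrative):

import numpy as np

def pairwise_correlation(traj_runs, i, j):
    # traj_runs: array of shape (M, n_steps, N, d) holding M independent runs;
    # returns the correlation over runs between neurons i and j, shape (n_steps, d)
    xi = traj_runs[:, :, i, :]
    xj = traj_runs[:, :, j, :]
    xi_c = xi - xi.mean(axis=0)
    xj_c = xj - xj.mean(axis=0)
    cov = (xi_c * xj_c).mean(axis=0)
    return cov / (xi.std(axis=0) * xj.std(axis=0) + 1e-12)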

Figure 11 Variations over time of the cross-correlation of the (V,w,y) variables of several FitzHugh-Nagumo neurons in a network. Top left: 2 neurons. Top right: 10 neurons. Bottom: 100 neurons. The cross-correlation decreases steadily with the number of neurons in the network.

The present study develops theoretical arguments to derive the mean-field equations resulting from the activity of large neuron ensembles. However, the rigorous and formal approach developed here does not allow direct characterization of brain states. The paper, however, opens the way to rigorous analysis of the dynamics of large neuron ensembles through derivations of different quantities that may be relevant. A first approach could be to derive the equations of the successive moments of the solutions. Truncating this expansion would yield systems of ordinary differential equations that can give approximate information on the solution. However, the choice of the number of moments taken into account is still an open question that can raise several deep questions [46].

Appendix 1: Proof of Theorem 4

In this appendix, we prove the convergence of the network equations towards the mean-field equations (Equation 22) and the propagation of chaos property. The proof follows standard proofs in the domain, in particular those of Tanaka or Sznitman [6, 10], adapted to our particular case where we consider a non-zero drift function and a time- and space-dependent diffusion function. It is based on the very powerful coupling argument, which identifies the almost sure limit of the process $X^i$ as the number of neurons tends to infinity, as popularized by Sznitman in [12], but whose idea dates back to the 1970s (for instance, Dobrushin uses it in [5]). This process is exactly the solution of the mean-field equation driven by the same Brownian motion as $X^i$ and with the same initial condition random variable. In our case, this leads us to introduce the sequence of independent stochastic processes $(\bar X_t^i)_{i=1\ldots N}$ having the same law as $\bar X^\alpha$, $\alpha=p(i)$, solution of the mean-field equation:

$$d\bar X_t^i = f_\alpha(t,\bar X_t^i)\,dt + \sum_{\gamma=1}^{P}\mathbb{E}_Z\big[b_{\alpha\gamma}(\bar X_t^i, Z_t^\gamma)\big]\,dt + g_\alpha(t,\bar X_t^i)\,dW_t^i + \sum_{\gamma=1}^{P}\mathbb{E}_Z\big[\beta_{\alpha\gamma}(\bar X_t^i, Z_t^\gamma)\big]\,dB_t^{i\gamma},$$
(33)

with initial condition $\bar X_0^i = X_0^i$, the initial condition of neuron i in the network, which was assumed to be independent and identically distributed. $(W_t^i)$ and $(B_t^{i\gamma})$ are the Brownian motions involved in the network equation (Equation 21). As described previously, $Z=(Z^1,\ldots,Z^P)$ is a process independent of $\bar X$ that has the same law. Denoting, as described previously, the probability distribution of $\bar X_t^\alpha$, solution of the mean-field equation (Equation 22), by $m_t^\alpha$, the law of the collection of processes $(\bar X_t^{i_k})$ for some fixed $k\in\mathbb{N}$, namely $m^{p(i_1)}\otimes\cdots\otimes m^{p(i_k)}$, is shown to be the limit of the process $(X_t^i)$, solution of the network equations (Equation 21), as N goes to infinity.

We recall, for completeness, Theorem 4:

Theorem 4 Under assumptions (H1) to (H4), the following holds true:

  • Convergence: For each neuron i of population α, the law of the multidimensional process X i , N converges towards the law of the solution of the mean-field equation related to population α, namely X ¯ α .

  • Propagation of chaos: For any $k\in\mathbb{N}$ and any k-tuple $(i_1,\ldots,i_k)$, the law of the process $(X_t^{i_1,N},\ldots,X_t^{i_k,N},\ t\le T)$ converges towards $m_t^{p(i_1)}\otimes\cdots\otimes m_t^{p(i_k)}$, i.e. the asymptotic processes have the law of the solution of the mean-field equations and are all independent.

Proof

On our way, we also prove that

$$\max_{i=1\ldots N}\; N\,\mathbb{E}\Big[\sup_{s\le T}\|X_s^{i,N}-\bar X_s^i\|^2\Big] < \infty,$$
(34)

which implies, in particular, convergence in law of the process $(X_t^{i,N},\ t\le T)$ towards $(\bar X_t^\alpha,\ t\le T)$, solution of the mean-field equations (Equation 22).

The proof basically consists of thoroughly analyzing the difference between the two processes as N tends to infinity. The difference is the sum of eight terms (we dropped the index N for the sake of simplicity of notations) denoted by A t through H t :

$$\begin{aligned} X_t^i - \bar X_t^i ={}& \underbrace{\int_0^t \big(f_\alpha(s,X_s^i)-f_\alpha(s,\bar X_s^i)\big)\,ds}_{A_t} + \underbrace{\int_0^t \big(g_\alpha(s,X_s^i)-g_\alpha(s,\bar X_s^i)\big)\,dW_s^i}_{B_t}\\ &+ \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(X_s^i,X_s^j)-b_{\alpha\gamma}(\bar X_s^i,X_s^j)\big)\,ds}_{C_t} + \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_s^i,X_s^j)-b_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)\big)\,ds}_{D_t}\\ &+ \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[b_{\alpha\gamma}(\bar X_s^i,Z_s^\gamma)\big]\big)\,ds}_{E_t} + \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(\beta_{\alpha\gamma}(X_s^i,X_s^j)-\beta_{\alpha\gamma}(\bar X_s^i,X_s^j)\big)\,dB_s^{i\gamma}}_{F_t}\\ &+ \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(\beta_{\alpha\gamma}(\bar X_s^i,X_s^j)-\beta_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)\big)\,dB_s^{i\gamma}}_{G_t} + \underbrace{\sum_{\gamma=1}^{P}\int_0^t \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(\beta_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[\beta_{\alpha\gamma}(\bar X_s^i,Z_s^\gamma)\big]\big)\,dB_s^{i\gamma}}_{H_t}. \end{aligned}$$
(35)

It is important to note that the probability distribution of these terms does not depend on the neuron i. We are interested in the limit, as N goes to infinity, of the quantity $\mathbb{E}[\sup_{s\le T}\|X_s^{i,N}-\bar X_s^i\|^2]$. We decompose this expression into the sum of the eight terms involved in Equation 35 using Hölder’s inequality and upperbound each term separately. The terms $A_t$ and $B_t$ are treated exactly as in the proof of Theorem 2. We start by assuming that f and g are uniformly globally K-Lipschitz continuous with respect to the second variable. The locally Lipschitz case is treated in the same manner as in the proof of Theorem 2: (1) by stopping the process at time $\tau_U$, (2) by using the Lipschitz continuity of f and g in the ball of radius U and (3) by a truncation argument using the almost sure boundedness of the solutions, extending the convergence to the locally Lipschitz case.

As seen previously, we have:

$$\mathbb{E}\Big[\sup_{s\le t}\|A_s\|^2\Big] \le K^2 T\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^i-\bar X_u^i\|^2\Big]\,ds,\qquad \mathbb{E}\Big[\sup_{s\le t}\|B_s\|^2\Big] \le 4K^2\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^i-\bar X_u^i\|^2\Big]\,ds.$$

Now, for $C_t$,

$$\begin{aligned} \|C_s\|^2 &= \Big\|\sum_{\gamma=1}^{P}\int_0^s \frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(X_u^i,X_u^j)-b_{\alpha\gamma}(\bar X_u^i,X_u^j)\big)\,du\Big\|^2\\ &\le T P\int_0^s \sum_{\gamma=1}^{P}\Big\|\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(X_u^i,X_u^j)-b_{\alpha\gamma}(\bar X_u^i,X_u^j)\big)\Big\|^2 du &&\text{(Cauchy-Schwarz)}\\ &\le T P L^2\int_0^s \|X_u^i-\bar X_u^i\|^2\,du &&\text{(assumption (H2))}. \end{aligned}$$

Therefore,

$$\sup_{s\le t}\|C_s\|^2 \le T P L^2\int_0^t \|X_s^i-\bar X_s^i\|^2\,ds,\qquad \mathbb{E}\Big[\sup_{s\le t}\|C_s\|^2\Big] \le T P L^2\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^i-\bar X_u^i\|^2\Big]\,ds.$$

Similarly, for $D_t$,

$$\begin{aligned} \sup_{s\le t}\|D_s\|^2 &\le T\int_0^t \Big\|\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_s^i,X_s^j)-b_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)\big)\Big\|^2 ds\\ &\le P T\int_0^t \Big(\sum_{\gamma=1}^{P}\Big\|\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_s^i,X_s^j)-b_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)\big)\Big\|^2\Big)\,ds &&\text{(Cauchy-Schwarz)}\\ &\le P T L^2\int_0^t \Big(\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\|X_s^j-\bar X_s^j\|^2\Big)\,ds &&\text{(assumption (H2))}. \end{aligned}$$

Hence, we have:

$$\mathbb{E}\Big[\sup_{s\le t}\|D_s\|^2\Big] \le P T L^2\int_0^t \Big(\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\mathbb{E}\big[\|X_s^j-\bar X_s^j\|^2\big]\Big)\,ds \le P T L^2\int_0^t \Big(\sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\mathbb{E}\Big[\sup_{u\le s}\|X_u^j-\bar X_u^j\|^2\Big]\Big)\,ds.$$

Therefore,

$$\mathbb{E}\Big[\sup_{s\le t}\|D_s\|^2\Big] \le P^2 T L^2\int_0^t \max_{j=1\ldots N}\mathbb{E}\Big[\sup_{u\le s}\|X_u^j-\bar X_u^j\|^2\Big]\,ds.$$

The terms $F_t$ and $G_t$ are treated in the same fashion, but instead of using the Cauchy-Schwarz inequality, the Burkholder-Davis-Gundy martingale moment inequality is used. For $F_t$, in detail, we obtain the analogous bound:

$$\mathbb{E}\Big[\sup_{s\le t}\|F_s\|^2\Big] \le 4 L^2 P\int_0^t \mathbb{E}\Big[\sup_{u\le s}\|X_u^i-\bar X_u^i\|^2\Big]\,ds.$$

Similarly, for $G_t$, we obtain:

$$\mathbb{E}\Big[\sup_{s\le t}\|G_s\|^2\Big] \le 4 L^2 P\int_0^t \max_{j=1\ldots N}\mathbb{E}\Big[\sup_{0\le u\le s}\|X_u^j-\bar X_u^j\|^2\Big]\,ds.$$

We are left with the problem of controlling the terms $E_t$ and $H_t$ that involve sums of processes with bounded second moment, thanks to Lemma 3 and assumption (H3). We have:

$$\begin{aligned} \mathbb{E}\Big[\sup_{s\le t}\|E_s\|^2\Big] &= \mathbb{E}\Big[\sup_{s\le t}\Big\|\int_0^s \sum_{\gamma=1}^{P}\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_u^i,\bar X_u^j)-\mathbb{E}_Z\big[b_{\alpha\gamma}(\bar X_u^i,Z_u^\gamma)\big]\big)\,du\Big\|^2\Big]\\ &\le T P\sum_{\gamma=1}^{P}\int_0^t \mathbb{E}\Big[\Big\|\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(b_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[b_{\alpha\gamma}(\bar X_s^i,Z_s^\gamma)\big]\big)\Big\|^2\Big]\,ds &&\text{(Cauchy-Schwarz)}, \end{aligned}$$

and using the Burkholder-Davis-Gundy martingale moment inequality,

$$\mathbb{E}\Big[\sup_{s\le t}\|H_s\|^2\Big] \le 4P\sum_{\gamma=1}^{P}\int_0^t \mathbb{E}\Big[\Big\|\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(\beta_{\alpha\gamma}(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[\beta_{\alpha\gamma}(\bar X_s^i,Z_s^\gamma)\big]\big)\Big\|^2\Big]\,ds.$$

Each of these two expressions involves an expectation of the form

$$\mathbb{E}\Big[\Big\|\frac{1}{N_\gamma}\sum_{j=1}^{N_\gamma}\big(\Theta(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[\Theta(\bar X_s^i,Z_s^\gamma)\big]\big)\Big\|^2\Big],$$

where $\Theta\in\{b_{\alpha\gamma},\beta_{\alpha\gamma}\}$, which we expand as:

$$\frac{1}{N_\gamma^2}\sum_{j,k=1}^{N_\gamma}\mathbb{E}\Big[\big(\Theta(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[\Theta(\bar X_s^i,Z_s^\gamma)\big]\big)^T\big(\Theta(\bar X_s^i,\bar X_s^k)-\mathbb{E}_Z\big[\Theta(\bar X_s^i,Z_s^\gamma)\big]\big)\Big].$$

All the terms of the sum corresponding to indexes j and k such that the three conditions $j\ne i$, $k\ne i$ and $j\ne k$ are satisfied are null since, in that case, $\bar X_t^i$, $\bar X_t^j$, $\bar X_t^k$ and $Z_t^\gamma$ are independent and, for $p(j)=p(k)=\gamma$, have the same law (see Footnote 7). In effect, denoting the measure of their common law by $m_t^\gamma$, we have:

$$\begin{aligned} &\mathbb{E}\Big[\big(\Theta(\bar X_s^i,\bar X_s^j)-\mathbb{E}_Z\big[\Theta(\bar X_s^i,Z_s^\gamma)\big]\big)^T\big(\Theta(\bar X_s^i,\bar X_s^k)-\mathbb{E}_Z\big[\Theta(\bar X_s^i,Z_s^\gamma)\big]\big)\Big]\\ &\quad = \mathbb{E}\big[\Theta(\bar X_s^i,\bar X_s^j)^T\,\Theta(\bar X_s^i,\bar X_s^k)\big] - \mathbb{E}\Big[\Theta(\bar X_s^i,\bar X_s^j)^T\!\int\Theta(\bar X_s^i,z)\,m_s^\gamma(dz)\Big]\\ &\qquad - \mathbb{E}\Big[\int\Theta(\bar X_s^i,z)^T m_s^\gamma(dz)\,\Theta(\bar X_s^i,\bar X_s^k)\Big] + \mathbb{E}\Big[\int\Theta(\bar X_s^i,z)^T m_s^\gamma(dz)\int\Theta(\bar X_s^i,z)\,m_s^\gamma(dz)\Big], \end{aligned}$$

expanding further and renaming the second z variable to y in the last term, we obtain:

$$\begin{aligned} &\int\Theta(x,y)^T\Theta(x,z)\,m_s^\gamma(dx)\,m_s^\gamma(dy)\,m_s^\gamma(dz) - \int\Theta(x,y)^T\Big(\int\Theta(x,z)\,m_s^\gamma(dz)\Big)\,m_s^\gamma(dx)\,m_s^\gamma(dy)\\ &\quad - \int\Big(\int\Theta(x,z)^T m_s^\gamma(dz)\Big)\,\Theta(x,y)\,m_s^\gamma(dx)\,m_s^\gamma(dy) + \int\Big(\int\Theta(x,z)^T m_s^\gamma(dz)\Big)\Big(\int\Theta(x,y)\,m_s^\gamma(dy)\Big)\,m_s^\gamma(dx), \end{aligned}$$

which is indeed equal to 0 by the Fubini theorem.

Therefore, there are no more than $3N_\gamma$ non-null terms in the sum, and all these terms have the same value (that depends on Θ), which is bounded by Lemma 3 and assumption (H3). We denote the supremum of these $2P^2$ values for $\Theta\in\{b_{\alpha\gamma},\beta_{\alpha\gamma}\}$ across all possible pairs of populations by $C/3$, and the smallest of the $N_\gamma$, $\gamma=1\ldots P$, by $N_{\min}$. We have shown that

$$\mathbb{E}\Big[\sup_{s\le t}\|E_s\|^2\Big]\ \text{and}\ \mathbb{E}\Big[\sup_{s\le t}\|H_s\|^2\Big] \le \frac{4\,C\,T\,P^2}{N_{\min}}.$$

Finally, we have:

$$\max_{i=1\ldots N}\mathbb{E}\Big[\sup_{s\le t}\|X_s^i-\bar X_s^i\|^2\Big] \le K_1\int_0^t \max_{j=1\ldots N}\mathbb{E}\Big[\sup_{u\le s}\|X_u^j-\bar X_u^j\|^2\Big]\,du + \frac{K_2}{N_{\min}},$$

for some positive constants K 1 and K 2 . Using Gronwall’s inequality, we obtain:

$$\max_{i=1\ldots N}\mathbb{E}\Big[\sup_{s\le t}\|X_s^i-\bar X_s^i\|^2\Big] \le \frac{K_3}{N_{\min}}$$
(36)

for some positive constant $K_3$. The right-hand side of this inequality tends to zero as N goes to infinity, proving the propagation of chaos property. In order to show a convergence with speed $1/\sqrt{N}$ as stated in the theorem, we use the fact that:

$$\max_{i=1\ldots N}\; N\,\mathbb{E}\Big[\sup_{s\le T}\|X_s^{i,N}-\bar X_s^i\|^2\Big] \le \frac{K_3\,N}{N_{\min}},$$

and the right-hand side of the inequality is bounded for all N because of the hypothesis $\lim_{N\to\infty} N_\alpha/N = c_\alpha\in(0,1)$ for $\alpha=1\ldots P$. This ends the proof. □

Electronic Supplementary Material

Notes

  1. More precisely, as shown in [79, 80], the convergence is to a larger - 13-dimensional - system with an invariant four-dimensional manifold on which the solution lives given appropriate initial conditions. See also [81].

  2. As we will see in the proof, most properties are valid as soon as $N_\alpha$ tends to infinity as N goes to infinity for all $\alpha\in\{1,\ldots,P\}$; the previous assumption allows quantifying the speed of convergence towards the asymptotic regime.

  3. The type of convergence is specified in the proof given in the Appendix.

  4. The notation m t α was introduced right after Equation 22.

  5. We have included a small noise (controlled by the parameter σ w ) on the adaptation variable w. This does not change the previous analysis, in particular proposition 1, but makes the McKean-Vlasov-Fokker-Planck equation well-posed in a cube of the state space with 0 boundary value, see e.g. [82].

  6. Note that we did not estimate the correlation within larger networks since, as predicted by Theorem 4, it will be smaller and smaller, requiring an increasingly large number of Monte Carlo simulations.

  7. Note that $i\ne j$ and $i\ne k$ as soon as $p(i)\ne p(j)=p(k)=\gamma$. In the case where $p(i)=\gamma$, it is easy to check that when j (respectively, k) is equal to i, all terms such that $k\ne j$ (respectively, $j\ne k$) are equal to 0.

References

  1. McKean H: A class of Markov processes associated with nonlinear parabolic equations. Proc Natl Acad Sci USA 1966,56(6):1907–1911. 10.1073/pnas.56.6.1907


  2. McKean H: Propagation of chaos for a class of non-linear parabolic equations. Lecture Series in Differential Equations 7. In Stochastic Differential Equations. Air Force Office Sci. Res., Arlington; 1967:41–57.


  3. Braun W, Hepp K: The Vlasov dynamics and its fluctuations in the 1/n limit of interacting classical particles. Commun Math Phys 1977,56(2):101–113. 10.1007/BF01611497


  4. Dawson D: Critical dynamics and fluctuations for a mean-field model of cooperative behavior. J Stat Phys 1983, 31: 29–85. 10.1007/BF01010922


  5. Dobrushin RL: Prescribing a system of random variables by conditional distributions. Theory Probab Appl 1970, 15: 458–486. 10.1137/1115049


  6. Tanaka H: Probabilistic treatment of the Boltzmann equation of Maxwellian molecules. Probab Theory Relat Fields 1978, 46: 67–105.


  7. Tanaka H, Hitsuda M: Central limit theorem for a simple diffusion model of interacting particles. Hiroshima Math J 1981,11(2):415–423.


  8. Tanaka H: Some probabilistic problems in the spatially homogeneous Boltzmann equation. In Theory and Application of Random Fields Lecture Notes in Control and Information Sciences. Edited by: Kallianpur G. Springer, Berlin; 1983:258–267.


  9. Tanaka H: Limit theorems for certain diffusion processes with interaction. North-Holland Mathematical Library 32. In Stochastic Analysis. North-Holland, Amsterdam; 1984:469–488.


  10. Sznitman A: Nonlinear reflecting diffusion process, and the propagation of chaos and fluctuations associated. J Funct Anal 1984,56(3):311–336. 10.1016/0022-1236(84)90080-6


  11. Sznitman A: A propagation of chaos result for Burgers’ equation. Probab Theory Relat Fields 1986,71(4):581–613. 10.1007/BF00699042


  12. Sznitman AS: Topics in propagation of chaos. Lecture Notes in Math. 1464. In Ecole d’Eté de Probabilités de Saint-Flour XIX 1989. Edited by: Burkholder D, Pardoux E, Sznitman AS. Springer, Berlin; 1991:165–251.


  13. Ecker A, Berens P, Keliris G, Bethge M, Logothetis N, Tolias A: Decorrelated neuronal firing in cortical microcircuits. Science 2010,327(5965):584. 10.1126/science.1179867


  14. Hodgkin A, Huxley A: A quantitative description of membrane current and its application to conduction and excitation in nerve. J Physiol 1952, 117: 500–544.


  15. Fitzhugh R: Theoretical effect of temperature on threshold in the Hodgkin-Huxley nerve model. J Gen Physiol 1966,49(5):989–1005. 10.1085/jgp.49.5.989


  16. FitzHugh R: Mathematical models of excitation and propagation in nerve. In Biological Engineering. Edited by: Schwan HP. McGraw-Hill Book Co., New York; 1969:1–85.


  17. Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.


  18. Lapicque L: Recherches quantitatifs sur l’excitation des nerfs traitee comme une polarisation. J. Physiol. Paris 1907, 9: 620–635.


  19. Tuckwell HC: Introduction to Theoretical Neurobiology. Cambridge University Press, Cambridge; 1988.


  20. Ermentrout GB, Terman D Interdisciplinary Applied Mathematics. In Foundations of Mathematical Neuroscience.. Springer, Berlin; 2010.


  21. FitzHugh R: Mathematical models of threshold phenomena in the nerve membrane. Bull Math Biol 1955,17(4):257–278.


  22. Nagumo J, Arimoto S, Yoshizawa S: An active pulse transmission line simulating nerve axon. Proc IRE 1962, 50: 2061–2070.


  23. Destexhe A, Mainen Z, Sejnowski T: Synthesis of models for excitable membranes, synaptic transmission and neuromodulation using a common kinetic formalism. J Comput Neurosci 1994,1(3):195–230. 10.1007/BF00961734


  24. Kandel ER, Schwartz JH, Jessel TM: Principles of Neural Science. 4th edition. McGraw-Hill, New York; 2000.


  25. Cox JC, Ingersoll JC Jr, Ross SA: A theory of the term structure of interest rates. Econometrica 1985,53(2):385–407. 10.2307/1911242


  26. Mao X: Stochastic Differential Equations and Applications. 2nd edition. Horwood, Chichester; 2008.


  27. Amari S: Characteristics of random nets of analog neuron-like elements. IEEE Trans Syst Man Cybern 1972,2(5):643–657.


  28. Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977,27(2):77–87. 10.1007/BF00337259


  29. Wilson H, Cowan J: Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 1972, 12: 1–24.


  30. Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern 1973,13(2):55–80.


  31. Hammerstein A: Nichtlineare Integralgleichungen nebst Anwendungen. Acta Math 1930, 54: 117–176. 10.1007/BF02547519


  32. Faugeras O, Grimbert F, Slotine JJ: Absolute stability and complete synchronization in a class of neural fields models. SIAM J Appl Math 2008, 61: 205–250. 10.1093/qjmam/hbn003

  33. Coombes S, Owen MR: Bumps, breathers, and waves in a neural network with spike frequency adaptation. Phys Rev Lett 2005, 94(14): Article ID 148102.

  34. Ermentrout B: Neural networks as spatio-temporal pattern-forming systems. Rep Prog Phys 1998, 61: 353–430. 10.1088/0034-4885/61/4/002


  35. Ermentrout G, Cowan J: Temporal oscillations in neuronal nets. J Math Biol 1979,7(3):265–280. 10.1007/BF00275728


  36. Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J Appl Math 2002, 63: 62–97. 10.1137/S0036139901389495

  37. Chossat P, Faugeras O: Hyperbolic planforms in relation to visual edges and textures perception. PLoS Comput Biol 2009, 5(12): Article ID e1000625.

  38. Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst 2010,9(3):954–998. 10.1137/090773611


  39. Abbott L, Van Vreeswijk C: Asynchronous states in networks of pulse-coupled neuron. Phys Rev 1993, 48: 1483–1490. 10.1103/PhysRevA.48.1483


  40. Amit D, Brunel N: Model of global spontaneous activity and local structured delay activity during delay periods in the cerebral cortex. Cereb Cortex 1997, 7: 237–252. 10.1093/cercor/7.3.237


  41. Brunel N, Hakim V: Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 1999, 11: 1621–1671. 10.1162/089976699300016179


  42. Brunel N: Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci 2000, 8: 183–208. 10.1023/A:1008925309027


  43. El Boustani S, Destexhe A: A master equation formalism for macroscopic modeling of asynchronous irregular activity states. Neural Comput 2009, 21: 46–100. 10.1162/neco.2009.02-08-710

  44. Mattia M, Del Giudice P: Population dynamics of interacting spiking neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2002, 66(5): Article ID 51917.

  45. Cai D, Tao L, Shelley M, McLaughlin DW: An effective kinetic representation of fluctuation-driven neuronal networks with application to simple and complex cells in visual cortex. Proc Natl Acad Sci USA 2004,101(20):7757–7762. 10.1073/pnas.0401906101


  46. Ly C, Tranchina D: Critical analysis of dimension reduction by a moment closure method in a population density approach to neural network modeling. Neural Comput 2007,19(8):2032–2092. 10.1162/neco.2007.19.8.2032


  47. Rolls ET, Deco G: The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford University Press, Oxford; 2010.


  48. Softky WR, Koch C: The highly irregular firing of cortical cells is inconsistent with temporal integration of random EPSPs. J Neurosci 1993, 13: 334–350.

  49. Brunel N, Latham P: Firing rate of noisy quadratic integrate-and-fire neurons. Neural Comput 2003, 15: 2281–2306. 10.1162/089976603322362365

  50. Plesser HE: Aspects of signal processing in noisy neurons. PhD thesis. Georg-August-Universität; 1999.

  51. Touboul J, Faugeras O: First hitting time of double integral processes to curved boundaries. Adv Appl Probab 2008,40(2):501–528. 10.1239/aap/1214950214

  52. Beggs JM, Plenz D: Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J Neurosci 2004,24(22):5216–5229. 10.1523/JNEUROSCI.0540-04.2004

  53. Benayoun M, Cowan JD, van Drongelen W, Wallace E: Avalanches in a stochastic model of spiking neurons. PLoS Comput Biol 2010,6(7): Article ID e1000846.

  54. Levina A, Herrmann JM, Geisel T: Phase transitions towards criticality in a neural system with adaptive interactions. Phys Rev Lett 2009,102(11): Article ID 118110.

  55. Touboul J, Destexhe A: Can power-law scaling and neuronal avalanches arise from stochastic dynamics? PLoS ONE 2010,5(2): Article ID e8982.

  56. Bressloff P: Stochastic neural field theory and the system-size expansion. SIAM J Appl Math 2009, 70: 1488–1521.

  57. Buice MA, Cowan JD: Field-theoretic approach to fluctuation effects in neural networks. Phys Rev E, Stat Nonlinear Soft Matter Phys 2007,75(5): Article ID 051919.

  58. Buice M, Cowan J, Chow C: Systematic fluctuation expansion for neural network activity equations. Neural Comput 2010,22(2):377–426. 10.1162/neco.2009.02-09-960

  59. Ohira T, Cowan J: Master-equation approach to stochastic neurodynamics. Phys Rev E, Stat Nonlinear Soft Matter Phys 1993,48(3):2259–2266. 10.1103/PhysRevE.48.2259

  60. Treves A: Mean-field analysis of neuronal spike dynamics. Network 1993,4(3):259–284. 10.1088/0954-898X/4/3/002

  61. Gerstner W: Time structure of the activity in neural network models. Phys Rev E, Stat Nonlinear Soft Matter Phys 1995, 51: 738–758. 10.1103/PhysRevE.51.738

  62. Faugeras O, Touboul J, Cessac B: A constructive mean-field analysis of multi-population neural networks with random synaptic weights and stochastic inputs. Front Comput Neurosci 2009. doi:10.3389/neuro.10.001.2009

  63. Guionnet A: Averaged and quenched propagation of chaos for spin glass dynamics. Probab Theory Relat Fields 1997,109(2):183–215. 10.1007/s004400050130

  64. Chizhov AV, Graham LJ: Population model of hippocampal pyramidal neurons, linking to refractory density approach to conductance-based neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2007, 75: Article ID 011924.

  65. Sompolinsky H, Crisanti A, Sommers H: Chaos in random neural networks. Phys Rev Lett 1988,61(3):259–262. 10.1103/PhysRevLett.61.259

  66. Sompolinsky H, Zippelius A: Relaxational dynamics of the Edwards-Anderson model and the mean-field theory of spin-glasses. Phys Rev B, Condens Matter Mater Phys 1982,25(11):6860–6875. 10.1103/PhysRevB.25.6860

  67. Bolley F, Cañizo JA, Carrillo JA: Stochastic mean-field limit: non-Lipschitz forces and swarming. Math Models Methods Appl Sci 2011,21(11):2179–2210. 10.1142/S0218202511005702

  68. Talay D, Vaillant O: A stochastic particle method with random weights for the computation of statistical solutions of McKean-Vlasov equations. Ann Appl Probab 2003, 13: 140–180.

  69. Bossy M, Talay D: A stochastic particle method for the McKean-Vlasov and the Burgers equation. Math Comput 1997,66(217):157–192. 10.1090/S0025-5718-97-00776-X

  70. Hutzenthaler M, Jentzen A, Kloeden P: Strong and weak divergence in finite time of Euler’s method for stochastic differential equations with non-globally Lipschitz continuous coefficients. Proc R Soc, Math Phys Eng Sci 2011,467(2130):1563–1576. 10.1098/rspa.2010.0348

  71. Hutzenthaler M, Jentzen A: Convergence of the stochastic Euler scheme for locally Lipschitz coefficients. Found Comput Math 2011,11(6):657–706. 10.1007/s10208-011-9101-9

  72. Schiesser W: The Numerical Method of Lines: Integration of Partial Differential Equations. Academic Press, San Diego; 1991.

  73. Schiesser WE, Griffiths GW: A Compendium of Partial Differential Equation Models: Method of Lines Analysis with Matlab. 1st edition. Cambridge University Press, New York; 2009.

  74. Ueberhuber CW: Numerical Computation 2: Methods, Software, and Analysis. Springer, Berlin; 1997.

  75. Morton KW, Mayers DF: Numerical Solution of Partial Differential Equations: An Introduction. Cambridge University Press, Cambridge; 2005.

  76. Touboul J, Hermann G, Faugeras O: Noise-induced behaviors in neural mean field dynamics. SIAM J Appl Dyn Syst 2012,11(1):49–81. 10.1137/110832392

  77. Baladron J, Fasoli D, Faugeras O: Three applications of GPU computing in neuroscience. Comput Sci Eng 2012, 14: 40–47.

  78. Herrmann S, Tugaut J: Non-uniqueness of stationary measures for self-stabilizing processes. Stoch Process Appl 2010,120(7):1215–1246. 10.1016/j.spa.2010.03.009

  79. Pakdaman K, Thieullen M, Wainrib G: Fluid limit theorems for stochastic hybrid systems with application to neuron models. Adv Appl Probab 2010,42(3):761–794. 10.1239/aap/1282924062

  80. Wainrib G: Randomness in neurons: a multiscale probabilistic analysis. PhD thesis. Ecole Polytechnique; 2010.

  81. Goldwyn JH, Imennov NS, Famulare M, Shea-Brown E: Stochastic differential equation models for ion channel noise in Hodgkin-Huxley neurons. Phys Rev E, Stat Nonlinear Soft Matter Phys 2011,83(4): Article ID 041908.

  82. Evans LC: Partial Differential Equations. Graduate Studies in Mathematics 19. American Mathematical Society, Providence; 1998.

Acknowledgements

This work was partially supported by the ERC grant #227747 NerVi, the FACETS-ITN Marie-Curie Initial Training Network #237955 and the IP project BrainScaleS #269921.

Author information

Corresponding author

Correspondence to Olivier Faugeras.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JB and DF developed the code for solving the stochastic differential equations, the McKean-Vlasov equations and the McKean-Vlasov-Fokker-Planck equations. They ran the numerical experiments and generated all the figures. DF derived some of the McKean-Vlasov equations in a heuristic fashion. OF and JT developed the models, proved the theorems and wrote the paper. All authors read and approved the final manuscript.

Electronic supplementary material

Additional file 1: Time evolution of the (V,w) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The four images in the left part of Figure 8 are four snapshots of this movie taken at time 0 (initial condition), time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.4, and the standard deviation of the membrane potential noise, to 0.27. (AVI 2.0 MB)

Additional file 2: Time evolution of the (V,y) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The four images in the right part of Figure 8 are four snapshots of this movie taken at time 0 (initial condition), time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.4, and the standard deviation of the membrane potential noise, to 0.27. (AVI 1.5 MB)

Additional file 3: Time evolution of the (V,w) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The three images in the left part of Figure 9 are three snapshots of this movie taken at time 30, time 50 and at a large enough time for the solution to be stationary. The input current is equal to 0.7, and the standard deviation of the membrane potential noise, to 0.45. (AVI 3.0 MB)

Additional file 4: Time evolution of the (V,y) marginal of the solution to the McKean-Vlasov-Fokker-Planck equation. The three images in the right part of Figure 9 are three snapshots of this movie taken at time 30, time 50, and at a large enough time for the solution to be stationary. The input current is equal to 0.7, and the standard deviation of the membrane potential noise, to 0.45. (AVI 2.2 MB)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Baladron, J., Fasoli, D., Faugeras, O. et al. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons. J. Math. Neurosc. 2, 10 (2012). https://doi.org/10.1186/2190-8567-2-10

Keywords