### Abstract

We consider a conductance-based neural network inspired by the generalized Integrate and Fire model introduced by Rudolph and Destexhe in 1996. We prove the existence and uniqueness of a Gibbs distribution characterizing spike train statistics. The corresponding Gibbs potential is explicitly computed. These results hold in the presence of a time-dependent stimulus and therefore apply to non-stationary dynamics.

### 1 Introduction

Neural networks have an overwhelming complexity. While an isolated neuron can exhibit a wide variety of responses to stimuli [1], from regular spiking to chaos [2,3], neurons coupled in a network via synapses (electrical or chemical) may show an even wider variety of collective dynamics [4] resulting from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli [5]. Focusing on action potentials, this complexity is manifested by drastic changes in the spiking activity, for instance when switching from spontaneous to evoked activity (see for example A. Riehle’s team experiments on the monkey motor cortex [6-9]). However, beyond this complexity may exist some hidden laws ruling a (hypothetical) “neural code” [10].

One way of unraveling these hidden laws is to seek some regularities or reproducibility
in the statistics of spikes. While early investigations of spiking activity focused
on firing rates, where neurons are considered as independent sources, researchers have
more recently concentrated on collective statistical indicators such as pairwise correlations.
Thorough experiments in the retina [11,12] as well as in the parietal cat cortex [13] suggested that such correlations are crucial for understanding spiking activity.
Those conclusions were obtained using the *maximal entropy principle* [14]. Assume that the average values of observable quantities (e.g., firing rates or spike
correlations) have been measured. Those average values constitute constraints for the
statistical model. In the maximal entropy principle, *assuming stationarity*, one looks for the probability distribution which maximizes the statistical entropy
given those constraints. This leads to a (time-translation invariant) Gibbs distribution.
In particular, fixing firing rates and the probability of pairwise coincidences of
spikes leads to a Gibbs distribution having the same form as the Ising model. This
idea was introduced by Schneidman et al. in [11] for the analysis of retina spike trains. They accurately reproduce the probability
of *spatial* spiking patterns. Since then, their approach has met with great success (see, e.g.,
[15-17]), although some authors have raised solid objections to this model [12,18-20], while several papers have pointed out the importance of *temporal* patterns of activity at the network level [21-23]. As a consequence, a few authors [13,24,25] have attempted to define time-dependent models of Gibbs distributions where constraints
include time-dependent correlations between pairs, triplets, and so on [26]. As a matter of fact, the analysis of the data of [11] with such models describes more accurately the statistics of *spatio-temporal* spike patterns [27].
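The maximal entropy construction with firing rates and pairwise coincidences as constraints can be made concrete on a toy example. The following sketch (all parameter values are arbitrary) enumerates an Ising-form Gibbs distribution by brute force and reads off the observables it predicts; tuning the Lagrange parameters `h`, `J` so that these predictions match measured averages is the (harder) inverse problem.

```python
from itertools import product
import math

def ising_gibbs(h, J):
    """Brute-force Gibbs distribution of Ising form, p(s) proportional to
    exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), over spiking patterns
    s in {0,1}^N. Exhaustive enumeration: toy sizes (N <= ~15) only."""
    N = len(h)
    patterns = list(product((0, 1), repeat=N))
    weights = []
    for s in patterns:
        E = sum(h[i] * s[i] for i in range(N))
        E += sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(i + 1, N))
        weights.append(math.exp(E))
    Z = sum(weights)  # partition function: normalizes the distribution
    return {s: w / Z for s, w in zip(patterns, weights)}

# predicted observables under the model (in a fit, h and J would be adjusted
# until these match the measured averages)
p = ising_gibbs(h=[-1.0, -1.0], J=[[0.0, 0.5], [0.0, 0.0]])
rate_0 = sum(prob for s, prob in p.items() if s[0] == 1)  # firing rate of neuron 0
coinc = p[(1, 1)]                                         # pairwise coincidence
```

Note that this spatial model assigns probabilities to instantaneous patterns only; the time-dependent extensions discussed above add constraints coupling patterns at different times.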

Taking into account all constraints inherent to experiments, it seems extremely difficult to find an optimal model describing spike train statistics. It is in fact likely that there is not one model, but many, depending on the experiment, the stimulus, the investigated part of the nervous system, and so on. Additionally, the assumptions made in the works quoted above are difficult to control. In particular, the maximal entropy principle assumes a stationary dynamics, while many experiments consider a time-dependent stimulus generating a time-dependent response where the stationary approximation may not be valid. At this stage, having an example where one knows the explicit form of the spike train probability distribution would be helpful to control those assumptions and to define related experiments.

This can be done by considering neural network models. Although, to be tractable, such models may be quite far from biological plausibility, they can give hints on which statistics can be expected in real neural networks. But, even in the simplest examples, characterizing spike statistics arising from the conjunction of non-linear effects, time propagation delays, synaptic noise, synaptic plasticity, and external stimuli is far from trivial on mathematical grounds.

In [28], we have nevertheless proposed an exact and explicit result for the characterization of spike train statistics in a discrete-time version of a Leaky Integrate-and-Fire neural network. The results were quite surprising. It has been shown that, whatever the parameter values (in particular the synaptic weights), spike trains are distributed according to a Gibbs distribution whose potential can be explicitly computed. The first surprise lies in the fact that this potential has infinite range, namely spike statistics have infinite memory. This is because the membrane potential evolution integrates its past values and the past influence of the network via the leak term. Although leaky integrate and fire models have a reset mechanism that erases the memory of the neuron whenever it spikes, it is not possible to upper bound the next firing time. As a consequence, the statistics are non-Markovian (for recent examples of non-Markovian behavior in neural models see also [29]). The infinite range of the potential corresponds, in the maximal entropy principle interpretation, to having infinitely many constraints.

Nevertheless, the leak term influence decays exponentially fast with time (this property
guarantees the existence and uniqueness of a Gibbs distribution). As a consequence,
one can approximate the exact Gibbs distribution by the invariant probability of a
Markov chain, with a memory depth proportional to the log of the (discrete time) leak
term. In this way, the truncated potential corresponds to a finite number of constraints
in the maximal entropy principle interpretation. However, the second surprise is that
this approximated potential is nevertheless far from the Ising model or any of the
models discussed above, which appear as quite bad approximations. In particular, there
is a need to consider *n*-uplets of spikes with time delays. This mere fact raises hard problems about evidencing
such types of potentials in experiments. In particular, new types of algorithms for spike
train analysis have to be developed [30].
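The Markovian truncation step described above can be quantified. In the discrete-time setting of [28], the leak damps past contributions by a factor γ ∈ (0, 1) per time step, so a tolerance ε on the neglected history translates into a memory depth proportional to log ε. A minimal illustration (the numerical values of γ and ε are hypothetical):

```python
import math

def memory_depth(gamma, eps):
    """Smallest integer m with gamma**m <= eps: contributions older than m
    steps, damped by the leak factor gamma in (0, 1) at each step, fall
    below the tolerance eps. Grows like log(eps) / log(gamma)."""
    return math.ceil(math.log(eps) / math.log(gamma))

m = memory_depth(gamma=0.9, eps=1e-3)  # memory depth of the Markov approximation
```

This is the sense in which the truncated potential, hence the number of constraints in the maximal entropy interpretation, scales with the log of the leak term.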

The model considered in [28] is rather academic: time evolution is discrete, synaptic interactions are instantaneous, dynamics is stationary (the stimulus is time-constant) and, as in a leaky integrate and fire model, conductances are constant. It is therefore necessary to investigate whether our conclusions remain valid for more realistic neural network models. In the present paper, we consider a conductance-based model introduced by Rudolph and Destexhe in [31], called the “generalized Integrate and Fire” (gIF) model. This model allows one to consider realistic synaptic responses and conductances depending on spikes arising in the past of the network, leading to a rather complex dynamics which has been characterized in [32] in the deterministic case (no noise in the dynamics). Moreover, the biological plausibility of this model is well accepted [33,34].

Here, we analyze spike statistics in the gIF model with noise and with a time-dependent stimulus. Moreover, the post-synaptic potential profiles are quite general and encompass all the examples that we know of in the literature. Our main result is to prove the existence and uniqueness of a Gibbs measure characterizing spike train statistics, for all parameters compatible with physical constraints (finite synaptic weights, bounded stimulus, and positive conductances). Here, as in [28], the corresponding Gibbs potential has infinite range, corresponding to a non-Markovian dynamics, although Markovian approximations can be proposed in the gIF model too. The Gibbs potential depends on all parameters in the model (especially connectivity and stimulus) and has a form considerably more complex than Ising-like models. As a by-product of the proof of our main result, additional interesting notions and results are produced, such as continuity with respect to a raster or exponential decay of memory thanks to the shape of synaptic responses.

The paper is organized as follows. In Section 2, we briefly introduce integrate and fire models and propose two important extensions of the classical models: the spike has a duration, and the membrane potential is reset to a non-constant value. These extensions, which are necessary for the validity of our mathematical results, nevertheless render the model more biologically plausible (see Section 9). One of the keys of the present work is to consider spike trains (raster plots) as infinite sequences. Since, in gIF models, conductances are updated upon the occurrence of spikes, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves continuously. On the other hand, spikes are discrete events. Conductances are updated according to these discrete-time events. The formalism introduced in Sections 2 and 3 allows us to handle this mixed dynamics properly. As a consequence, these sections define the gIF model with more mathematical structure than the original paper [31] and mostly contain original results. Moreover, we add to the model several original features, such as the consideration of a general form of synaptic profile with exponential decay or the introduction of noise. Section 4 proposes a preliminary analysis of gIF model dynamics. In Sections 5 and 6, we provide several useful mathematical propositions as a necessary step toward the analysis of spike statistics, developed in Section 7, where we prove the main result of the paper: existence and uniqueness of a Gibbs distribution describing spike statistics. Sections 8 and 9 are devoted to a discussion of the practical consequences of our results for neuroscience.

### 2 Integrate and fire model

We consider the evolution of a set of *N* neurons. Here, neurons are considered as “points” instead of spatially extended and
structured objects. As a consequence, we define, for each neuron *k*, a variable called the “membrane potential of neuron
*k* at time *t*”, without specification of which part of a real neuron (axon, soma, dendritic spine, …)
it corresponds to. Denote

We focus here on “integrate and fire models”, where dynamics always consists of two regimes.

#### 2.1 The “integrate regime”

Fix a real number *θ* called the “firing threshold of the neuron”.^{1} Below the threshold,
*k*’s dynamics is driven by an equation of the form:

where
*k*. In its most general form, the neuron *k*’s membrane conductance
*t*. The explicit form of
*t* and on the past activity of the network. It also contains a stochastic component
modeling noise in the system (e.g., synaptic transmission, see Section 3.5).

#### 2.2 LIF model

A classical example of an integrate and fire model is the Leaky Integrate-and-Fire (LIF) model introduced in [36], where Equation (1) reads:

where
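The displayed form of Equation (2) is not reproduced here; the sketch below assumes the classical LIF form C dV/dt = −g_L (V − E_L) + I(t), with reset at threshold. All names and parameter values are illustrative, not taken from the paper.

```python
def lif_trace(I, dt=0.1, T=100.0, C=1.0, gL=0.1, EL=0.0, theta=1.0, Vreset=0.0):
    """Euler integration of a classical LIF neuron (assumed form of Eq. (2)):
    C dV/dt = -gL (V - EL) + I, with reset to Vreset when V reaches theta."""
    V, spikes, trace = EL, [], []
    steps = int(round(T / dt))
    for n in range(steps):
        V += (dt / C) * (-gL * (V - EL) + I)
        if V >= theta:          # threshold crossing: register a spike, then reset
            spikes.append(n * dt)
            V = Vreset
        trace.append(V)
    return trace, spikes

trace, spikes = lif_trace(I=0.2)  # I / gL = 2 > theta, so the neuron fires periodically
```

With a constant supra-threshold input the model fires with a perfectly regular inter-spike interval, which illustrates why conductance-based extensions are needed to capture richer dynamics.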

#### 2.3 Spikes

The dynamical evolution (1) may eventually lead the membrane potential to reach the threshold
*θ*. If, at some time *t*, the membrane potential of neuron *k* attains the threshold, then
neuron *k* emits a spike or “fires”. In our model, as in biophysics, a spike has a finite
duration *δ*; in real neurons,
*δ* is of the order of a millisecond. Changing the time units, we may set *δ* = 1.

**Fig. 1.** Time course of the membrane potential in our model. The *blue dashed curve* illustrates the shape of a real spike, but what we model is the *red curve*.

#### 2.4 Raster plots

In experiments, spiking neuron activity is represented by “raster plots”, namely
a graph with time in abscissa, and a neuron labeling in ordinate such that a vertical
bar is drawn each “time” a neuron emits a spike. Since spikes have a finite duration
*δ*, such a representation limits the time resolution: events with a time scale smaller
than *δ* are not distinguished. As a consequence, if neuron 1 fires at time

As a consequence, one has to consider two types of variables with distinct types of dynamics. On the one hand, the membrane potential, which is the physical variable associated with neuron dynamics, evolves in continuous time. On the other hand, spikes, which are the quantities of interest in the present paper, are discrete events. To properly define this mixed dynamics and study its properties, we have to model spike times and raster plots.

#### 2.5 Spike times

If, at time *t*,
*integer time immediately after* *t*, called the spike time. Choosing integers for the spike time occurrence is a direct
consequence of setting
*k* and integer *n*, we associate a “spiking state” defined by:

For convenience and in order to simplify the notations in the mathematical developments,
we call
*t* (thus
*t* is

#### 2.6 Reset

In the classical formulation of integrate and fire models, the spike occurs *simultaneously* with a reset of the membrane potential to some *constant* value

The reason why the reset time is the integer number

Indeed, in our model, the reset value

#### 2.7 The shape of membrane potential during the spike

On biophysical grounds, the time course of the membrane potential during the spike
includes a depolarization and re-polarization phase due to the non-linear effects
of gated ionic channels on the conductance. This leads to introduce, in modeling,
additional variables such as activation/inactivation probabilities as in the Hodgkin-Huxley
model [35] or an adaptation current as, e.g., in the FitzHugh-Nagumo model [39-42] (see the discussion section for extensions of our results to those models). Here,
since we are considering only one variable for the neuron state, the membrane potential,
we need to define the spike profile, i.e., the course of

#### 2.8 Mathematical representation of raster plots

The “spiking pattern” of the neural network at integer time *n* is the vector
*m* and *n*. Such sequences are called *spike blocks*. Additionally, we note that

Call
*n*.

To each raster
*r*-th time of firing of neuron *j* in the raster *ω*. In other words, we have
*k* is used for a post-synaptic neuron while the index *j* refers to pre-synaptic neurons. Spiking events are used to update the conductance
of neuron *k* according to spikes emitted by pre-synaptic neurons. That is why we label the spike
times with an index *j*.
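The notations of this subsection can be mirrored directly in code: a raster is an ordered sequence of spiking patterns, a spike block is a slice of it, and the firing times of a pre-synaptic neuron *j* are enumerated in order. A minimal sketch (the toy raster shown is arbitrary):

```python
def block(omega, m, n):
    """Spike block omega_m^n: the spiking patterns of raster omega
    between the integer times m and n (inclusive)."""
    return omega[m:n + 1]

def firing_times(omega, j):
    """Ordered firing times t_j^(1), t_j^(2), ... of neuron j:
    the times n with omega_j(n) = 1."""
    return [n for n, pattern in enumerate(omega) if pattern[j] == 1]

# toy raster over 5 integer times; omega[n][k] is the spiking state
# of neuron k at time n
omega = [(1, 0, 0), (0, 1, 0), (1, 0, 1), (0, 0, 0), (1, 1, 0)]
```

Infinite rasters, as used in the paper, are of course only approximated by such finite sequences.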

We introduce here two specific rasters which are of use in the paper. We note Ω_{0} the raster such that
_{1} the raster

Finally, we use the following notation borrowed from [43]. We note, for
*r* integer:

For simplicity, we consider that

#### 2.9 Representation of time-dependent functions

Throughout the paper, we use the following convention. For a real function of *t* and *ω*, we write
*t* and of the spike block
*before* *t*. This constraint is imposed by causality.

#### 2.10 Last reset time

We define
*t* where neuron *k*’s membrane potential has been reset, in the raster *ω*. This is −∞ if the membrane potential has never been reset. As a consequence of our
choice (4) for the reset time
*t* and the raster before *t*. The membrane potential value of neuron *k* at time *t* is controlled by the reset value
*t*.
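As a sketch, the last reset time of this subsection amounts to scanning the raster backwards from *t*; a finite raster stands in for the infinite past, and −∞ encodes “never reset”:

```python
import math

def last_reset_time(omega, k, t):
    """tau_k(t, omega): the latest time n < t at which neuron k fired
    (and was therefore reset); -inf if neuron k never fired before t."""
    times = [n for n, pattern in enumerate(omega) if n < t and pattern[k] == 1]
    return max(times) if times else -math.inf

# toy raster: 2 neurons, 4 time steps; neuron 0 fires at times 1 and 3
omega = [(0, 1), (1, 0), (0, 0), (1, 1)]
```

The membrane potential of neuron *k* at time *t* then only depends on the raster through what happened since `last_reset_time(omega, k, t)`.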

### 3 Generalized integrate and fire models

In this paper, we concentrate on an extension of (2), called the “generalized Integrate-and-Fire” (gIF) model, introduced in [31] and closer to biology [33,34], since it treats neuron interactions via synaptic responses more seriously.

#### 3.1 Synaptic conductances

Depending on the neurotransmitter they use for synaptic transmission (AMPA, NMDA,
GABA A, GABA B [44]), neurons can be excitatory (population

The variation in the membrane potential of neuron *k* at time *t* reads:

where

where

We may rewrite Equation (6) in the form (1) setting

and

#### 3.2 Conductance update upon a spike occurrence

The conductances
*t* but also on pre-synaptic spikes occurring before *t*. This is a general statement, which is modeled in gIF models as follows. Upon arrival
of a spike in the pre-synaptic neuron *j* at time
*k* is modified as:

In this equation, the quantity
*j* and *k*. This allows us to encode the graph structure of the neural network in the matrix
*G* with entries

The function

(exponential profile) or:

with *H* the Heaviside function (that mimics causality) and
*t* is a time, the division by

Contrary to (9), the synaptic profile (10), with
*α* profile may obey a Green equation of type [45]:

where
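The displayed formulas (9) and (10) are not reproduced here; the sketch below assumes their standard forms, namely an exponential profile H(t) exp(−t/τ) and an alpha profile H(t) (t/τ) exp(−t/τ), where the division by τ keeps the argument dimensionless and places the peak at t = τ:

```python
import math

def exp_profile(t, tau):
    """Exponential synaptic profile (assumed form of Eq. (9)):
    H(t) exp(-t / tau), with H the Heaviside function (causality)."""
    return math.exp(-t / tau) if t >= 0 else 0.0

def alpha_profile(t, tau):
    """Alpha synaptic profile (assumed form of Eq. (10)):
    H(t) (t / tau) exp(-t / tau). Unlike the exponential profile,
    it vanishes at t = 0 and peaks at t = tau with value 1/e."""
    return (t / tau) * math.exp(-t / tau) if t >= 0 else 0.0
```

Both profiles decay exponentially for large *t*, which is the property used in Section 3.3.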

#### 3.3 Mathematical constraints on the synaptic responses

Throughout the paper, we assume that the

for some integer *d*. So that

This has the following consequence. For all *t*,
*r* integer, we have, setting

Therefore, as

where
*d*.

We introduce the following (Hardy) notation: if a function

**Proposition 1**

*as*

Additionally, the constraint (11) implies that there is some
*t*, for all *k*, *j*:

Indeed, for

Due to (11), this series converges (e.g., by the Cauchy criterion). We set:

On physical grounds, it implies that the conductance

#### 3.4 Synaptic summation

Assume that Equation (8) remains valid for an arbitrary number of pre-synaptic spikes
emitted by neuron *j* within a finite time interval
*t*, upon the arrival of spikes at times

The conductance at time *s*,
*j*’s activity preceding *s*. This term is therefore unknown unless one knows exactly the past evolution before
*s*. One way to circumvent this problem is to take *s* arbitrarily far in the past, i.e., taking
*s* in our notations. Taking *s* arbitrarily far in the past corresponds to assuming that the system has evolved long
enough so that it has reached a sort of “permanent regime”, not necessarily stationary,
when the observation starts. On phenomenological grounds, it is enough to take −*s* larger than all characteristic relaxation times in the system (e.g., leak rate and
synaptic decay rate). Here, for mathematical purposes, it is easier to take the limit

Since
*t*, via the spiking times
*ω*. In other words, one can know the value of the conductances
*t* only if the past spike times of the network are known. We write

We set

with the convention that
_{0} is the raster such that no neuron ever fires). The limit (14) exists from (13).
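Synaptic summation then amounts to adding one synaptic response per past pre-synaptic spike, weighted by the synaptic weight; because the profile decays, truncating the infinite past at a finite depth approximates the limit (14) up to a controlled error. A sketch, with an assumed alpha-shaped profile:

```python
import math

def alpha_profile(t, tau=1.0):
    """Assumed alpha-shaped synaptic response H(t) (t / tau) exp(-t / tau)."""
    return (t / tau) * math.exp(-t / tau) if t >= 0 else 0.0

def conductance(G_kj, spike_times_j, t):
    """g_kj(t, omega): synaptic summation of Section 3.4 -- each past spike
    of pre-synaptic neuron j contributes one synaptic response, weighted
    by the synaptic weight G_kj."""
    return G_kj * sum(alpha_profile(t - s) for s in spike_times_j if s <= t)

g = conductance(G_kj=0.5, spike_times_j=[0.0, 1.0, 2.0], t=3.0)
```

In particular, `conductance` is monotone in the spike history: adding pre-synaptic spikes can only increase it, which is the mechanism behind the bounds of Section 5.1.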

#### 3.5 Noise

We allow, in the definition of the current

where
*k* but this extension is straightforward and we do not develop it here. The noise term
can be interpreted as the random variation in the ionic flux of charges crossing the
membrane per unit time at the post-synaptic bouton, upon opening of ionic channels
due to the binding of neurotransmitter.

We assume that
*N*-dimensional Wiener process. Call *P* the noise probability distribution and
*P*. Then, by definition,
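A Wiener process can be sampled by accumulating independent Gaussian increments of variance dt; this is what a numerical integration of the gIF dynamics would use for the noise term (discretization step and horizon below are arbitrary):

```python
import random

def wiener_path(T, dt, seed=0):
    """Sample path of a one-dimensional standard Wiener process on [0, T]:
    W(0) = 0, then independent Gaussian increments of std dev sqrt(dt)."""
    rng = random.Random(seed)
    sigma = dt ** 0.5
    W, path = 0.0, [0.0]
    for _ in range(int(round(T / dt))):
        W += rng.gauss(0.0, sigma)
        path.append(W)
    return path

path = wiener_path(T=1.0, dt=0.001)
```

The *N*-dimensional process of the text is simply *N* independent copies of such a path, one per neuron.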

#### 3.6 Differential equation for the integrate regime of gIF

Summarizing, we write Equation (6) in the form:

where:

This is the most general form of conductance considered in this paper.

Moreover,

where

These equations hold when the membrane potential is below the threshold (Integrate regime).

Therefore, gIF models constitute rather complex dynamical systems: the vector field
(r.h.s.) of the differential Equation (16) depends on an auxiliary “variable”, which
is the past spike sequence
*t* to later times, one needs to know the spikes arising before *t*. This is precisely what makes gIF models more interesting than the LIF model. The definition
of conductances introduces long-term *memory* effects.

IF models implement a reset mechanism on the membrane potential: If neuron *k* has been reset between *s* and *t*, say at time *τ*, then
*is not* erased by the reset. That is why we have to consider a system with infinite memory.

#### 3.7 The parameters space

The stochastic dynamical system (16) depends on a huge set of parameters: the membrane
capacities
*θ*, the reversal potentials

In this paper, we are not interested in describing properties arising for specific
values of those parameters, but instead in generic properties that hold on sets of
parameters. More specifically, we denote the list of all parameters
*γ*. This is a vector in ℝ^{K}, where
*K* is the total number of parameters. In this paper, we assume that *γ* belongs to a bounded subset

### 4 gIF model dynamics for a fixed raster

We assume that the raster *ω* is fixed, namely the spike history is given. Then, it is possible to integrate
Equation (16) (Integrate regime) and to obtain explicitly the value of the membrane
potential of a neuron at time *t*, given the membrane potential value at time *s*. Additionally, the reset condition (4) has the consequence of removing the dependence
of neuron *k* on the past anterior to

#### 4.1 Integrate regime

For

We have:

and:

Fix two times
*k*,

We then have, integrating the previous equation with respect to
*s* and *t* and setting

This equation gives the variation in membrane potential during a period of rest (no
spike) of the neuron. Note, however, that this neuron can still receive spikes from
the other neurons via the update of conductances (made explicit in the previous equation
by the dependence in the raster plot *ω*).

The term

has the dimension of a voltage. It corresponds to the integration of the total current
between *s* and *t* weighted by the effective leak term

where,

is the synaptic contribution. Moreover,

where we set:

the characteristic leak time of neuron *k*. We have included the leak reversal potential term in this “external” term for convenience.
Therefore, even if there is no external current, this term is nevertheless non-zero.

The sum of the synaptic and external terms gives the deterministic contribution to the membrane potential. We note:

Finally,

is a noise term. This is a Gaussian process with mean 0 and variance:

The square root of this quantity has the dimension of a voltage.

As a final result, for a fixed *ω*, the variation in membrane potential during a period of rest (no spike) of neuron
*k* between *s* and *t* reads (subthreshold oscillations):

#### 4.2 Reset

In Equation (4), as in all IF models that we know, the reset of the membrane potential
has the effect of removing the dependence of
*k* fires between *s* and *t* in the raster *ω*. As a consequence, Equation (24) holds, from the “last reset time” introduced in
Section 2.10 up to time *t*. Then, Equation (24) reads

where:

is a Gaussian process with mean zero and variance:

### 5 Useful bounds

We now prove several bounds used throughout the paper.

#### 5.1 Bounds on the conductance

From (13), and since

Therefore,

so that the conductance is uniformly bounded in *t* and *ω*. The minimal conductance is attained when no neuron ever fires, so that Ω_{0} is the “lowest conductance state”. Conversely, the maximal conductance is reached
when all neurons fire all the time, so that Ω_{1} is the “highest conductance state”. To simplify notations, we note
*k* while

#### 5.2 Bounds on membrane potential

Now, from (19), we have, for

As a consequence,

Moreover,

so that:

Thus,

Establishing similar bounds for

In this case,

so that:

Consequently,

**Proposition 2**

*which provides uniform bounds in* *s*, *t*, *ω* *for the deterministic part of the membrane potential*.

#### 5.3 Bounds on the noise variance

Let us now consider the stochastic part

If
*mutatis mutandis* for the right-hand side. We set:

so that:

**Proposition 3**

#### 5.4 The limit τ_{k}(t, ω) → −∞

For fixed *s* and *t*, there are infinitely many rasters such that
*positive* whatever

Fix *s* real. For all *ω* such that
*k* does not fire between *s* and *t* - we have, from (28), (31),

exists as well. The same holds for the external term

Finally, since

which is a Gaussian process with mean 0 and a variance

### 6 Continuity with respect to a raster

#### 6.1 Definition

Due to the particular structure of gIF models, we have seen that the membrane potential
at time *t* is both a function of *t* and of the full sequence of past spikes

**Definition 1** Let *m* be a positive integer. The *m*-*variation* of a function

where the definition of

**Definition 2** The function
*continuous* if

Additional information is provided by the rate of convergence to 0 with *m*. The faster this convergence, the smaller the error made when replacing an infinite
raster by a spike block on a finite time horizon.
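The *m*-variation can be probed numerically: take two histories that agree on the *m* steps preceding *t* but differ arbitrarily before, and measure the resulting difference in a conductance built from a decaying profile (here exponential, an assumption). The difference decays geometrically in *m*, like the tail of the profile:

```python
import math

TAU = 2.0  # assumed synaptic decay constant

def conductance(spike_times, t):
    """Toy conductance: sum of exponential responses exp(-(t - s)/TAU)
    over past spikes."""
    return sum(math.exp(-(t - s) / TAU) for s in spike_times if s <= t)

def m_variation(t, m, depth=200):
    """Crude estimate of the m-variation at time t: both histories have no
    spike on [t - m, t]; one is empty before t - m, the other has a spike
    at every integer time there (a truncated stand-in for an infinite past)."""
    busy_past = list(range(t - depth, t - m))
    return abs(conductance(busy_past, t) - conductance([], t))

vals = [m_variation(t=0, m=m) for m in (1, 5, 9)]  # decays geometrically in m
```

This is only a numerical illustration of Definition 1, not the supremum over all raster pairs that the definition requires.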

#### 6.2 Continuity of conductances

**Proposition 4** *The conductance*
*is continuous in* *ω*, *for all* *t*, *for all*

*Proof* Fix

since the set of firing times

Therefore, as

which converges to 0 as

Therefore, from (17),

which converges exponentially fast to 0 as

#### 6.3 Continuity of the membrane potentials

**Proposition 5** *The deterministic part of the membrane potential*,
*is continuous and its* *m*-*variation decays exponentially fast with* *m*.

*Proof* In the proof, we shall establish precise upper bounds for the variation in

Therefore, from (31),

and
*ω*.

Now, the product

so that:

Since, as

we have,

which converges to 0 as

Let us show the continuity of

The following inequality is used at several places in the paper. For a

Here, it gives, for

For the first term, we have,

Let us now consider the second term. If

So, we have, for the variation of

so that finally,

with

and

Now, let us show the continuity of
*ω*. We have:

where, in the last inequality, we have used that the supremum in the variation is
attained for

where,

and

As a conclusion,

#### 6.4 Continuity of the variance of V_{k}^{(noise)}(τ_{k}(t, ⋅), t, ⋅)

Using the same type of arguments, one can also prove that

**Proposition 6** *The variance*
*is continuous in* *ω*, *for all* *t*, *for all*

*Proof* We have, from (27)

For the first term, we have that the sup in

For the second term, we have:

so that finally,

with

and continuity follows. □

#### 6.5 Remark

Note that the variation in all quantities considered here is exponentially decaying
with a time constant given by

### 7 Statistics of raster plots

#### 7.1 Conditional probability distribution of V_{k}(t)

Recall that *P* is the joint distribution of the noise and
*P*. Under *P*, the membrane potential *V* is a stochastic process whose evolution, below the threshold, is given by Equations
(24), (25) and, above the threshold, by (4). It follows from the previous analysis that:

**Proposition 7** *Conditionally to*
*is Gaussian with mean*:

*and covariance*:

*where*
*is given by* (27).

*Moreover*, *the*
*’s*,
*are conditionally independent*.

*Proof* Essentially, the proof is a direct consequence of Equations (24), (25) and the Gaussian
nature of the noise

□

#### 7.2 The transition probability

We now compute the probability of a spiking pattern at time

**Proposition 8** *The probability of*
*conditionally to*
*is given by*:

*with*

*where*
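The structure of Proposition 8 (Proposition 7 combined with conditional independence) can be sketched as follows: each neuron fires at time *n* with the probability that its conditionally Gaussian membrane potential exceeds the threshold, and the pattern probability is the product over neurons. The Gaussian tail is expressed via the complementary error function; the means and standard deviations below are placeholders for the quantities (25), (27), not the paper's actual expressions.

```python
import math

def gauss_tail(x):
    """pi(x) = P(Z >= x) for a standard Gaussian Z, via the
    complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def transition_probability(pattern, means, stds, theta=1.0):
    """Probability of the spiking pattern omega(n) given the past (sketch
    of Proposition 8): neurons are conditionally independent, and neuron k
    fires iff its Gaussian membrane potential (mean means[k], standard
    deviation stds[k]) exceeds the threshold theta."""
    p = 1.0
    for w, mu, sig in zip(pattern, means, stds):
        p_fire = gauss_tail((theta - mu) / sig)
        p *= p_fire if w == 1 else 1.0 - p_fire
    return p

p = transition_probability(pattern=(1, 0), means=(0.8, 0.2), stds=(0.5, 0.5))
```

Since the means depend on the whole past raster through the conductances, these transition probabilities define a chain with infinite memory rather than a finite Markov chain, which is precisely the setting of the Gibbs distribution result.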