Adaptation and Fatigue Model for Neuron Networks and Large Time Asymptotics in a Nonlinear Fragmentation Equation

Abstract

Motivated by a model for neural networks with adaptation and fatigue, we study a conservative fragmentation equation that describes the probability density of neurons structured by the time s elapsed since their last discharge.

In the linear setting, we extend an argument by Laurençot and Perthame to prove exponential decay to the steady state. This extension allows us to handle coefficients with large variation rather than constant coefficients. In another extension of the argument, we treat a weakly nonlinear case and prove total desynchronization in the network. For stronger nonlinearities, we present a numerical study of the impact of the fragmentation term on the appearance of synchronization of neurons in the network, using two “extreme” cases.

Mathematics Subject Classification (2010): 35B40, 35F20, 35R09, 92B20.

1 Introduction

This article is devoted to the study of the large time behavior of the solution to a conservative aggregation-fragmentation equation, a class of equations that arises in many applications and that has been widely studied both in the linear case [8, 14, 18, 19] and with nonlinearities [6, 9–11, 13, 20].

Our particular motivation is an extension of the elapsed time neural population model, a partial differential equation structured by “age” studied in [15–17], which gives a new approach to understanding synchronization/desynchronization of neural assemblies with respect to the strength of their interconnections. Here, we add a fragmentation term to the model in order to incorporate the fact that the dynamics of the neurons are also related to their past activity; notably, neurons display adaptation and fatigue, that is, a progressive decrease of their propensity to fire in response to a maintained step current. This is one of the most common neuronal properties that can introduce correlations in firing times. In this work, we examine whether and how the inclusion of this property can affect the dynamics of neural assemblies. As a consequence, the mathematical study of this equation is more complex. Based on the ideas in [12], we give a new result of exponential decay of the solution to its stationary state in the case where the network is weakly connected.

We consider the following equation:

$$
\begin{cases}
\dfrac{\partial n(s,t)}{\partial t}+\dfrac{\partial n(s,t)}{\partial s}+p\big(s,N(t)\big)\,n(s,t)=\displaystyle\int K(s,u)\,p\big(u,N(t)\big)\,n(u,t)\,du, & s\ge 0,\ t\ge 0,\\[4pt]
n(0,t)=0,\qquad N(t):=\displaystyle\int_0^{+\infty}p\big(s,N(t)\big)\,n(s,t)\,ds,\\[4pt]
n(s,0)=n^0(s)\ge 0,\qquad \displaystyle\int_0^\infty n^0(s)\,ds=1.
\end{cases}
$$
(1)
  • n(s,t)

    denotes the probability density of neurons at time t such that the time elapsed since their last discharge is s. It is a fundamental property, which follows from our assumptions, that for all times t ≥ 0,

    $$
    \int_0^{+\infty} n(s,t)\,ds=\int_0^{+\infty} n^0(s)\,ds=1,\qquad n(s,t)\ge 0.
    $$
    (2)
  • N(t)

    represents the flux of neurons which discharge at time t and is identified with the global amplitude of stimulation of the network.

  • p(s,N)

    models the firing rate of neurons submitted to a stimulation of amplitude N and such that the time elapsed since the last discharge is s. The coupling between the neurons is taken into account via the function p, which varies according to the global activity N(t). Hence, in this model, the strength of interconnections between the neurons is encoded in the variations of p with respect to the variable N.

  • The kernel K(s,u) ∈ M([0,∞)×[0,∞)), the set of nonnegative measures on [0,∞)×[0,∞), gives the distribution of neurons which take the state s when a discharge occurs after an elapsed time u since their last discharge.

The structured nature of Eq. (1) is related to the choice of description of the dynamics of the neurons, made via the time elapsed since their last discharge. The term “fragmentation” stems from the fact that, at each time, the density of neurons which discharge is fragmented, via K, with respect to the new state of neurons after their discharge; each fragment is given by the flux of neurons which discharge and return to the same state s.

The main question we address here is to prove exponential convergence as t → ∞ of the solution of the nonlinear problem (1) to a steady state solution A(s), with constant activity A*, that is,

$$
\begin{cases}
\dfrac{\partial A(s)}{\partial s}+p(s,A^*)\,A(s)=\displaystyle\int K(s,u)\,p(u,A^*)\,A(u)\,du, & s\ge 0,\\[4pt]
A(0)=0,\qquad \displaystyle\int_0^\infty A(s)\,ds=1,\qquad A^*=\displaystyle\int p(s,A^*)\,A(s)\,ds.
\end{cases}
$$
(3)

The existence of a stationary solution is proved in Sect. 5 and we attack convergence through an adaptation of the strategy in [16]. For the linear problem, we construct some kind of spectral gap, which opens the door to also treating ‘small’ (in a weak sense) nonlinearities.

The paper is organized as follows. In Sect. 2, we state our main results after giving the assumptions on the coefficients; we separate the linear and nonlinear cases because we can prove much stronger results in the linear case. In Sect. 3, we study the solution of the linear version of Eq. (1); more precisely, we prove its large time convergence to the stationary state with exponential decay (this is the proof of Theorem 2.1). Section 4 is devoted to the nonlinear case and to the proofs of Theorems 2.2 and 2.3. We prove the existence of stationary states, i.e., solutions to (3), in Sect. 5. In the final Sect. 6, we present numerical results in the case where the nonlinearity is strong enough to obtain periodic solutions, in order to understand the effect of the fragmentation term on the appearance of spontaneous activity in the network. Several general or technical results are postponed to the Appendices so as to keep the proofs focused on the main arguments.

2 Assumptions and Main Results

We need technical assumptions on the coefficients p and K in (1) and, before we write them in full generality, we begin with a particular example. For the kernel K, a Dirac mass at 0, K(s,u) := δ_{s=0}, the equation is equivalently written as an age-structured equation, and this situation is covered in [16, 17]. In this case, the interpretation is clear: after they discharge, all neurons take the same state s = 0, irrespective of the time elapsed since their last discharge.

A more general example of kernel K is to take

$$
K(s,u):=\delta\big(s-\psi(u)\big),
$$

where ψ is a given increasing function. In this situation, the post-discharge state s of the neurons only depends on their discharge state u (the time elapsed since their last discharge). Still more general is when K(s,·) is a function: this includes variability in the neuron population or randomness in their behavior.

Also, a convenient example of discharge rate p(s,·) is a regularized Heaviside function

$$
p(s,N)=H_\delta\big(s-\sigma(N)\big),\qquad 0<\sigma_-\le\sigma(\cdot)\le\sigma_+<\infty,
$$

with σ a given smooth function. It is a caricature for modeling three desirable properties of the neurons:

  • immediately after a discharge, the neuron enters a refractory period, i.e., after a discharge, a neuron cannot discharge again during a certain time interval; this is the assumption that p(s,N)=0 for s small,

  • after the end of its refractory period, the neuron rapidly recovers a significant sensitivity,

  • for an excitatory system, a larger stimulation of the neuron induces a smaller refractory period, so that ∂p(s,N)/∂N > 0; this is written here as σ′ < 0, even though this assumption is not used in the present analysis (a small code sketch of these example coefficients follows this list).
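To fix ideas, here is a minimal numerical sketch of such a pair (p, K); the sigmoid regularization, the smoothing width `delta`, and the particular threshold map `sigma` are illustrative choices of ours, not specifications taken from the analysis.

```python
import numpy as np

def sigma(N, s_minus=0.5, s_plus=1.0):
    """Decreasing threshold sigma(N) with values in [s_minus, s_plus]:
    a larger activity N shortens the refractory period (sigma' < 0)."""
    return s_minus + (s_plus - s_minus) * np.exp(-N)

def p(s, N, p_M=1.0, delta=0.05):
    """Regularized Heaviside firing rate H_delta(s - sigma(N)):
    ~0 during the refractory period s < sigma(N), ~p_M afterwards."""
    return p_M / (1.0 + np.exp(-(s - sigma(N)) / delta))

def psi(u, theta=0.5):
    """Post-discharge state map for K(s,u) = delta(s - psi(u));
    psi(u) = theta*u gives 0 <= psi'(u) = theta < 1 (see Appendix A)."""
    return theta * u
```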

These examples of p and K are covered by more general assumptions, including links between the two quantities, which we explain now. In Appendix A, we give explicitly the conditions on the two functions σ and ψ which are induced by our assumptions below.

2.1 General Assumptions

Assumptions on the Rate p(s,N)

$$
0\le p(s,N)\le p_M<+\infty,\qquad \int_0^\infty\left|\frac{\partial p(s,N)}{\partial s}\right|ds<+\infty.
$$
(4)

There is a bounded function σ:(0,)(0, σ + ] such that

$$
p(s,N)=p_M\ \text{ for } s\ge\sigma(N),\qquad p(s,N)<p_M\ \text{ for } s<\sigma(N).
$$
(5)

Finally, we assume

$$
\left\|\frac{\partial p}{\partial s}\right\|_{L^\infty([0,\infty)\times[0,\infty))}+\left\|\frac{\partial^2 p}{\partial s^2}\right\|_{L^\infty([0,\infty)\times[0,\infty))}+\left\|\frac{\partial^2 p}{\partial s\,\partial N}\right\|_{L^\infty([0,\infty)\times[0,\infty))}<+\infty,
$$
(6)
$$
\left\|\frac{\partial p}{\partial N}\right\|_{L^\infty([0,\infty)\times[0,\infty))}\le\eta<1\quad\text{and small enough}.
$$
(7)

In particular, this assumption and (5) guarantee that, for n a probability density, there is a unique positive solution N to

$$
N=\int_0^\infty p(s,N)\,n(s)\,ds.
$$

In other words, the nonlinearity in (1) is well determined.
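Concretely, since (7) makes the map N ↦ ∫p(s,N)n(s)ds a contraction, the activity can be computed by straightforward fixed-point iteration. A minimal sketch (grid, tolerance, and names are ours; it assumes a vectorized rate p(s, N) as in the earlier sketch):

```python
import numpy as np

def solve_activity(n, s, p, tol=1e-12, max_iter=10_000):
    """Solve N = int_0^infty p(s, N) n(s) ds for the unique N >= 0.
    Since ||dp/dN||_infty <= eta < 1 and int n ds = 1, the right-hand
    side is a contraction in N and the iteration converges geometrically."""
    N = 0.0
    for _ in range(max_iter):
        N_new = np.trapz(p(s, N) * n, s)  # quadrature of int p(s,N) n(s) ds
        if abs(N_new - N) < tol:
            return N_new
        N = N_new
    return N
```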

Assumptions on the Distribution K(s,u)

The first assumption expresses that neurons which discharge in a state u return to an “earlier” state s ≤ u:

$$
K(s,u)\ge 0,\qquad K(s,u)=0\ \text{ for } s>u,\qquad \int_{s=0}^{u}K(s,u)\,ds=1\quad\forall u>0.
$$
(8)

These assumptions are fundamental in order to guarantee that n(·,t) is a probability density, as written in (2), but also that ∫K(s,u)p(u)n(u)du is well defined for n an integrable function.

Our second assumption is a structural property of K(·,·) which appears for aggregation-fragmentation equations in [12]. In our context, it says that the shorter the time elapsed before a discharge, the earlier the state after discharge. To express this, we define

$$
f(s,u):=\int_{x=0}^{s}K(x,u)\,dx,
$$
(9)

and assume that

$$
\Phi(s,u):=-\frac{\partial f(s,u)}{\partial u}\ge 0.
$$
(10)

The assumptions (8) imply the following properties:

$$
0\le f\le 1,\qquad f(s,u)=1\ \text{ for } s\ge u,\qquad \Phi(s,u)=0\ \text{ for } s>u.
$$
(11)

The third assumption imposes a significant change of state after discharge; namely we assume that there is a constant 0θ<1 such that for all u0,

$$
\int_0^{+\infty}\Phi(s,u)\,ds=\frac{\partial}{\partial u}\int_0^u sK(s,u)\,ds\le\theta<1.
$$
(12)

As a consequence, integrating in u,

$$
\int_0^u sK(s,u)\,ds\le\theta u.
$$
(13)

Assumptions Linking p and K

The following assumptions which link p and K allow us to prove, in the case of weakly connected neurons, convergence of the solution of Eq. (1) to the stationary state A(s) in (3) with an exponential rate.

We use the notation

$$
p^*(s)=p(s,A^*),\qquad \sigma^*=\sigma(A^*).
$$
(14)

Our strongest assumption is a smallness assumption on σ* and θ, but not on p_M; it is written as follows. Let

$$
B^*:=e^{\int_0^{\sigma^*}p^*(w)\,dw}\left[\sigma^*\int_0^{\sigma^*}\left|\frac{\partial p^*(s)}{\partial s}\right|ds+\theta\int_0^{\sigma^*}p^*(s)\,ds\right]>0.
$$

We assume that

$$
B^*<1,\qquad \frac{e^{\int_0^{\sigma^*}p^*(w)\,dw}}{1-B^*}\int_0^{\sigma^*}\Phi(u,s)\,du+\int_{\sigma^*}^{+\infty}\Phi(u,s)\,du<p_M.
$$
(15)

Notice that the second condition can be replaced by the stronger, but more direct inequality

$$
\theta\,\frac{e^{\int_0^{\sigma^*}p^*(w)\,dw}}{1-B^*}<p_M.
$$

In the linear case, we also sometimes use one further assumption:

$$
\exists\,\mu>0\ \text{such that}\quad \sup_{u\le\sigma} p(u)\left[\int_0^u K(s,u)\,e^{\mu(u-s)}\,ds-1\right]<\mu.
$$
(16)

Let us make the meaning of this assumption more precise. At μ = 0, and reversing (13) with some 0 ≤ θ′ < θ (that is, assuming ∫₀ᵘ sK(s,u)ds ≥ θ′u), we compute

$$
\frac{d}{d\mu}\left[\int_0^u K(s,u)\,e^{\mu(u-s)}\,ds-1\right]\bigg|_{\mu=0}=\int_0^u K(s,u)\,(u-s)\,ds\le u(1-\theta'),
$$

so that (16) holds for μ small if

$$
(1-\theta')\sup_{0\le u\le\sigma}p(u)\,u<1.
$$

This is again to say that σ is small enough, but not necessarily p_M.
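As a concrete check (our own computation, for the kernel K(s,u) = δ_{s=u/2} used in Sect. 6), the bracket in (16) is explicit:

$$
\int_0^u K(s,u)\,e^{\mu(u-s)}\,ds-1=e^{\mu u/2}-1,\qquad\text{so (16) reads}\quad \sup_{u\le\sigma}p(u)\big(e^{\mu u/2}-1\big)<\mu.
$$

Since (e^{μu/2} − 1)/μ → u/2 as μ → 0, this holds for μ small as soon as sup_{u≤σ} p(u)u/2 < 1, in agreement with the criterion above, because here u − ∫₀ᵘ sK(s,u)ds = u/2.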

2.2 Exponential Decay for the Linear Equation

The linear equation arises as the limiting case where we neglect interconnections within the network, that is,

$$
p(s,N)\equiv p(s),\qquad \sigma(N)\equiv\sigma.
$$

Our first theorem gives a result of exponential decay of solutions to the linear version of (1) toward the steady state A built in Sect. 5. There are several routes toward this goal. A spectral gap can be proved using Poincaré-type inequalities; this idea has been developed in [1, 4, 5] for smooth kernels K. A probabilistic approach has also been developed; see [2, 3] and the references therein.

Here, we follow yet another approach, developed in [12, 19], which handles singular kernels as measures and Dirac masses. It uses the auxiliary functions

$$
m(s,t):=n(s,t)-A(s),
$$
(17)
$$
M(s,t):=\int_0^s\big[n(x,t)-A(x)\big]\,dx=\int_0^s m(x,t)\,dx,
$$
(18)
$$
J(s,t):=\frac{\partial M(s,t)}{\partial t}.
$$
(19)

With these notations and the function P(s) > 0 (with p* = p in the linear case at hand) constructed in Appendix C, we can state our first result.

Theorem 2.1 (Exponential decay. Linear case)

We make the assumptions (4), (5), (8), (10), (12), (15), (16) on p and K, with p independent of the variable N, and assume that

$$
\int P(s)\,|M(s,0)|\,ds<+\infty,\qquad \int P(s)\,|J(s,0)|\,ds<+\infty,
$$
(20)

where P(s) is a function uniformly bounded from below, defined in Lemma C.1. Then there exists ν > 0 such that

$$
\int_0^\infty P(s)\,\big|n(s,t)-A(s)\big|\,ds\le C e^{-\nu t}\left(\int_0^{+\infty}P(s)\,|M(s,0)|\,ds+\int_0^{+\infty}P(s)\,|J(s,0)|\,ds+\int_0^{\sigma}\big|n(s,0)-A(s)\big|\,ds\right).
$$

2.3 Exponential Decay for the Nonlinear Equation

With the notations and preparation of the linear case, we can state our second theorem, on exponential decay for the weakly nonlinear equation when p is smooth enough. For a better presentation of the proofs, we separate our statements into two theorems.

Theorem 2.2 (Exponential decay. Nonlinear case)

We make all the assumptions of Sect. 2.1 on p and K, and assume furthermore that

$$
\int P(s)\,|M(s,0)|\,ds<+\infty,
$$

still with P constructed in Lemma C.1. Then there exist C > 0, ν > 0 such that

$$
\int_0^\infty P(s)\,|M(s,t)|\,ds\le e^{-\nu t}\int_0^{+\infty}P(s)\,|M(s,0)|\,ds,
$$
(21)
$$
\big|A^*-N(t)\big|\le C e^{-\nu t},
$$
(22)
$$
\big|N'(t)\big|\le C e^{-\nu t}.
$$
(23)

Theorem 2.3 (Decay on m)

With the assumptions of Theorem 2.2, and assuming additionally that

$$
\int P(s)\,|J(s,0)|\,ds<+\infty,
$$

there exist two constants C>0, ν>0 such that

$$
\int_0^{+\infty}|m(s,t)|\,ds\le C e^{-\nu t}.
$$
(24)

3 Exponential Decay for the Linear Equation

We prove the time decay with exponential rate as stated in Theorem 2.1, for the linear equation. In this situation, the function p does not depend on N and assumption (5) can be written as

$$
p(s,N)\equiv p(s),\qquad p(s)=p_M\ \text{ for } s>\sigma.
$$

The strategy of the proof of Theorem 2.1 is to observe that exponential decay for |n(s,t) − A(s)| follows from exponential decay for the function M(s,t) and its first time derivative, which is much easier to prove than for m itself; the counterpart is that it involves a weighted norm, as expressed in Theorem 2.1, at variance with the Poincaré method in [1, 4, 5]. Indeed, there are two main advantages in considering M(s,t) and J(s,t) instead of m(s,t) := n(s,t) − A(s): (i) they satisfy a closed equation; (ii) the dual problem to the corresponding stationary equation has a negative first eigenvalue. This directly implies exponential decay of both ∫₀^{+∞}P(s)|M(s,t)|ds and ∫₀^{+∞}P(s)|J(s,t)|ds.

We split the proof of Theorem 2.1 in two steps:

  • In the first part, we check that the proof of Theorem 2.1 is a direct consequence of the exponential decay in L 1 of P(s)|M(s,t)| and P(s)|J(s,t)|.

  • The second part is devoted to proving exponential decay in L¹ of P(s)|M(s,t)| and P(s)|J(s,t)|.

3.1 Reduction to Exponential Decay on M(s,t) and ∂M(s,t)/∂t

We derive Theorem 2.1 from the following proposition.

Proposition 3.1 Assume that there are constants λ<0, B>0 and a function P such that

$$
\frac{1}{B}\le P(x)\le B\,P(y)\quad\text{for } 0\le x\le y,
$$
(25)

and

$$
\begin{cases}
\displaystyle\int_0^{+\infty}P(s)\,|M(s,t)|\,ds\le C e^{\lambda t}\int_0^{+\infty}P(s)\,|M(s,0)|\,ds,\\[6pt]
\displaystyle\int_0^{+\infty}P(s)\,|J(s,t)|\,ds\le C e^{\lambda t}\int_0^{+\infty}P(s)\,|J(s,0)|\,ds.
\end{cases}
$$

Then there exists a constant C and a ν>0 such that

$$
\int_0^{+\infty}\big|n(s,t)-A(s)\big|\,ds\le C e^{-\nu t}\left(\int_0^{+\infty}P(s)\,|M(s,0)|\,ds+\int_0^{+\infty}P(s)\,|J(s,0)|\,ds+\int_0^{\sigma}\big|n(s,0)-A(s)\big|\,ds\right).
$$

Proof Using Eq. (29) for M (see the first step of the proof of Proposition 3.2), we obtain

$$
m(s,t)=\frac{\partial M(s,t)}{\partial s}=-J(s,t)-p(s)\,M(s,t)-\int_{u=s}^{+\infty}\frac{\partial p(u)}{\partial u}\,f(s,u)\,M(u,t)\,du+\int\Phi(s,u)\,p(u)\,M(u,t)\,du.
$$
(26)

We first use this relation to handle the values s > σ; then ∂p(u)/∂u = 0 in (26) (since p ≡ p_M there) and it reduces to

$$
|m(s,t)|\le|J(s,t)|+\big|p(s)\,M(s,t)\big|+\int\Phi(s,u)\,p(u)\,|M(u,t)|\,du.
$$

Multiplying this inequality by P, integrating between σ and +∞ with respect to the variable s, and using estimate (11) and the fact that Φ(s,u) = 0 for s > u, we obtain

$$
\int_\sigma^{+\infty}P(s)\,|m(s,t)|\,ds\le 2\int_\sigma^{+\infty}P(s)\big(|M(s,t)|+|J(s,t)|\big)\,ds+\int_\sigma^{+\infty}\left(\int_\sigma^{+\infty}\Phi(s,u)\,P(s)\,ds\right)p(u)\,|M(u,t)|\,du.
$$

Using assumption (12) on Φ, we obtain that for all u(σ,+), there exists r u (σ,u) such that

$$
\int_\sigma^{+\infty}P(s)\,\mathbf{1}_{s<u}\,\Phi(s,u)\,ds\le\theta\,P(r_u)\le\theta B\,P(u),
$$

the second inequality being a consequence of assumption (25) on P. We deduce that there exists a constant C such that

$$
\int_\sigma^{\infty}P(s)\,|m(s,t)|\,ds\le C\int_0^{+\infty}P(s)\big(|M(s,t)|+|J(s,t)|\big)\,ds.
$$
(27)

Finally, with the exponential decay assumptions on M and J in Proposition 3.1, we conclude that

$$
\int_\sigma^{\infty}P(s)\,|m(s,t)|\,ds\le C e^{-\nu t}\int_0^{+\infty}P(s)\big(|M(s,0)|+|J(s,0)|\big)\,ds.
$$
(28)

Next, we control the small values of s, namely ∫₀^σ |m(s,t)|ds. We cannot control this quantity directly and instead estimate ∫₀^σ e^{−μs}|m(s,t)|ds for μ given in condition (16).

We set v(s,t) := e^{−μs} m(s,t). Then, for s ∈ [0,σ], v satisfies the equation

$$
\frac{\partial v}{\partial t}+\frac{\partial v}{\partial s}+(\mu+p)v=e^{-\mu s}\int_0^{+\infty}p(u)\,K(s,u)\,m(u,t)\,du,
$$

and thus

$$
\frac{\partial|v|}{\partial t}+\frac{\partial|v|}{\partial s}+(\mu+p)|v|\le e^{-\mu s}\int_0^{+\infty}p(u)\,K(s,u)\,|m(u,t)|\,du.
$$

Integrating the above inequality over s ∈ (0,σ), using that v(0,t) = m(0,t) = 0 and (8), we obtain

$$
\begin{aligned}
\frac{d}{dt}\int_0^\sigma|v(s,t)|\,ds+\mu\int_0^\sigma|v(s,t)|\,ds&\le\int_0^\sigma\!\!\int_0^{+\infty}p(u)\,K(s,u)\,e^{-\mu s}\,|m(u,t)|\,du\,ds-\int_0^\sigma p(u)\,|v(u,t)|\,du\\
&\le\int_0^\sigma p(u)\left(\int_0^u K(s,u)\,e^{\mu(u-s)}\,ds-1\right)|v(u,t)|\,du+p_M\int_\sigma^{+\infty}|m(u,t)|\,du.
\end{aligned}
$$

Therefore, using assumption (16) and estimate (28), there is a ν>0 such that

$$
\frac{d}{dt}\int_0^\sigma|v(s,t)|\,ds\le-\nu\int_0^\sigma|v(s,t)|\,ds+C e^{-\nu t}\int_0^\infty P(s)\big(|M(s,0)|+|J(s,0)|\big)\,ds.
$$

The Gronwall lemma then yields exponential decay of ∫₀^σ |v(s,t)|ds and, since e^{−μσ}|m| ≤ |v| ≤ |m| on [0,σ], this estimate together with (28) concludes the proof of Proposition 3.1. □

3.2 Exponential Decay of M and ∂M/∂t

We now establish the assumptions in Proposition 3.1 on exponential decay for M and J. This is stated in the following proposition.

Proposition 3.2 There exist a constant λ<0 and a function P satisfying properties (25) such that the following estimates hold

$$
\int_0^{+\infty}P(s)\,|M(s,t)|\,ds\le C e^{\lambda t}\int_0^{+\infty}P(s)\,|M(s,0)|\,ds,\qquad \int_0^{+\infty}P(s)\,|J(s,t)|\,ds\le C e^{\lambda t}\int_0^{+\infty}P(s)\,|J(s,0)|\,ds,
$$

assuming the initial bounds (20) are satisfied.

Proof We divide the proof in two steps. We first derive a closed form for the equation on M(s,t); this is our main observation, which allows us to extend the argument in [12] to nonconstant coefficients. Then, thanks to the dual problem that we study in Appendix C, we conclude the proof.

Step 1. The equation on M(s,t). Integrating in s the difference between Eq. (1) and the stationary equation satisfied by A, we find successively

$$
\begin{aligned}
&\frac{\partial M(s,t)}{\partial t}+\frac{\partial M(s,t)}{\partial s}+\int_0^s p(x)\,\frac{\partial M(x,t)}{\partial x}\,dx=\int_{x=0}^{s}\int K(x,u)\,p(u)\,\frac{\partial M(u,t)}{\partial u}\,du\,dx,\\[4pt]
&\frac{\partial M(s,t)}{\partial t}+\frac{\partial M(s,t)}{\partial s}+p(s)\,M(s,t)=\int_0^s\frac{\partial p(x)}{\partial x}\,M(x,t)\,dx-\int_{x=0}^{s}\int\frac{\partial}{\partial u}\big(K(x,u)\,p(u)\big)\,M(u,t)\,du\,dx,\\[4pt]
&\frac{\partial M(s,t)}{\partial t}+\frac{\partial M(s,t)}{\partial s}+p(s)\,M(s,t)=\int_{u=0}^{s}\frac{\partial p(u)}{\partial u}\,M(u,t)\,du-\int_{u=0}^{\infty}\frac{\partial p(u)}{\partial u}\,M(u,t)\int_{x=0}^{s}K(x,u)\,dx\,du+\int\Phi(s,u)\,p(u)\,M(u,t)\,du,\\[4pt]
&\frac{\partial M(s,t)}{\partial t}+\frac{\partial M(s,t)}{\partial s}+p(s)\,M(s,t)=\int_{u=0}^{s}\frac{\partial p(u)}{\partial u}\,\big(1-f(s,u)\big)\,M(u,t)\,du-\int_{u=s}^{\infty}\frac{\partial p(u)}{\partial u}\,f(s,u)\,M(u,t)\,du+\int\Phi(s,u)\,p(u)\,M(u,t)\,du.
\end{aligned}
$$

As f(s,u) = 1 for s ≥ u, the first integral on the right vanishes and this equality reduces to

$$
\frac{\partial M(s,t)}{\partial t}+\frac{\partial M(s,t)}{\partial s}+p(s)\,M(s,t)=-\int_{u=s}^{+\infty}\frac{\partial p(u)}{\partial u}\,f(s,u)\,M(u,t)\,du+\int p(u)\,\Phi(s,u)\,M(u,t)\,du.
$$
(29)

We can now insert the absolute values and find

$$
\frac{\partial|M(s,t)|}{\partial t}+\frac{\partial|M(s,t)|}{\partial s}+p(s)\,|M(s,t)|\le\int_{u=s}^{+\infty}\left|\frac{\partial p(u)}{\partial u}\right|f(s,u)\,|M(u,t)|\,du+\int p(u)\,\Phi(s,u)\,|M(u,t)|\,du.
$$
(30)

There are two routes to go further. To cover the case of interest where p can vanish during the refractory period, we use a duality argument. Alternatively, one can use formula (30) directly under different assumptions on p; this is performed in Appendix B.

Step 2. End of the proof of Proposition 3.2. By construction, we have M(0,t) = M(∞,t) = 0. Therefore, we may multiply inequality (30) by P; using Lemma C.1 and assumption (20), we may integrate by parts and find that there exists λ < 0 such that

$$
\frac{d}{dt}\int|M(s,t)|\,P(s)\,ds\le\lambda\int|M(s,t)|\,P(s)\,ds,
$$
(31)

which proves the decay of |M| in Proposition 3.2 thanks to the Gronwall lemma.

Because time only enters in Eq. (29) through M(s,t), we may differentiate in time and find that J still satisfies (29); therefore, the inequality (30) also holds for |J| and, since 0 = J(0,t) = J(∞,t), we conclude as before that

$$
\frac{d}{dt}\int|J(s,t)|\,P(s)\,ds\le\lambda\int|J(s,t)|\,P(s)\,ds,
$$

which concludes the proof of Proposition 3.2. □

4 Exponential Decay for the Nonlinear Case

The proofs of Theorems 2.2 and 2.3 follow the strategy used to prove Theorem 2.1. The main difficulty is that the control on M is only a weak control on m, while the nonlinear term, which involves N(t), depends on m in a strong sense. To overcome this difficulty, we have assumed that the function p is regular enough; this allows us, via an integration by parts, to trade regularity of the nonlinear term N(t) against Lipschitz regularity of p.

Step 1. Proof of (21) and (22). Let n be the solution of Eq. (1) and A the stationary state in (3). Then m := n − A is a solution of the equation

$$
\begin{cases}
\dfrac{\partial m(s,t)}{\partial t}+\dfrac{\partial m(s,t)}{\partial s}+p(s,A^*)\,m(s,t)=\displaystyle\int K(s,u)\,p(u,A^*)\,m(u,t)\,du\\[4pt]
\qquad\qquad+\big[p(s,A^*)-p\big(s,N(t)\big)\big]\,n(s,t)-\displaystyle\int K(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du,\quad s\ge 0,\ t\ge 0,\\[4pt]
m(0,t)=0,\qquad m(s,0)=n^0(s)-A(s),\qquad \displaystyle\int_0^\infty m(s,0)\,ds=0,\qquad A^*:=\displaystyle\int p(s,A^*)\,A(s)\,ds.
\end{cases}
$$

Following the calculation in the linear case, the function M(s,t):= 0 s m(u,t)du is solution of the equation:

$$
\begin{cases}
\dfrac{\partial M(s,t)}{\partial t}+\dfrac{\partial M(s,t)}{\partial s}+p(s,A^*)\,M(s,t)=-\displaystyle\int_{u=s}^{+\infty}\frac{\partial p(u,A^*)}{\partial u}\,f(s,u)\,M(u,t)\,du+\displaystyle\int\Phi(s,u)\,p(u,A^*)\,M(u,t)\,du\\[6pt]
\qquad\qquad+\displaystyle\int_0^s\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du-\displaystyle\int f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du.
\end{cases}
$$
(32)

We introduce the remainder term

$$
R(s,t):=\int_0^s\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du-\int f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du,
$$

which can be written using (11) as

$$
R(s,t)=-\int_s^{+\infty}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du.
$$

Using assumptions (7) and (5), we find that

$$
|R(s,t)|\le
\begin{cases}
0 & \text{for } s\ge\sigma^+:=\max\sigma(\cdot),\\
2\eta\,\big|A^*-N(t)\big| & \text{otherwise}.
\end{cases}
$$
(33)

As before, we may enter the absolute values and find

$$
\frac{\partial|M(s,t)|}{\partial t}+\frac{\partial|M(s,t)|}{\partial s}+p(s,A^*)\,|M(s,t)|\le\int_{u=s}^{+\infty}\left[\left|\frac{\partial p(u,A^*)}{\partial u}\right|f(s,u)+\Phi(s,u)\,p(u,A^*)\right]|M(u,t)|\,du+|R(s,t)|.
$$

Multiplying the inequality obtained for M by P, where P is as in Lemma C.1, we conclude that there is λ < 0 (and close to 0) such that

$$
\frac{d}{dt}\int P(s)\,|M(s,t)|\,ds\le\lambda\int P(s)\,|M(s,t)|\,ds+2\eta\,\|P\|_{L^\infty(0,\sigma^+)}\,\big|A^*-N(t)\big|.
$$
(34)

To proceed further, using the notation (17) and M(0,t) = M(∞,t) = 0, we write

$$
\begin{aligned}
A^*-N(t)&=\int_0^\infty\big[p(s,A^*)\,A(s)-p\big(s,N(t)\big)\,n(s,t)\big]\,ds\\
&=-\int_0^\infty p(s,A^*)\,m(s,t)\,ds+\int_0^\infty\big[p(s,A^*)-p\big(s,N(t)\big)\big]\,n(s,t)\,ds\\
&=\int_0^\infty\frac{\partial p(s,A^*)}{\partial s}\,M(s,t)\,ds+\int_0^\infty\big[p(s,A^*)-p\big(s,N(t)\big)\big]\,n(s,t)\,ds,
\end{aligned}
$$

from which we conclude

$$
\big|A^*-N(t)\big|\le\frac{1}{\inf_{s\ge 0}P}\left\|\frac{\partial p}{\partial s}\right\|_{L^\infty([0,\infty)\times[0,\infty))}\int P(s)\,|M(s,t)|\,ds+\eta\,\big|N(t)-A^*\big|,
$$

and finally

$$
\big|A^*-N(t)\big|\le\frac{1}{(1-\eta)\,\inf_{s\ge 0}P}\left\|\frac{\partial p}{\partial s}\right\|_{L^\infty([0,\infty)\times[0,\infty))}\int P(s)\,|M(s,t)|\,ds.
$$
(35)

Since η>0 is supposed small enough, we may insert this estimate in (34) and we deduce that there exists ν>0 such that

$$
\frac{d}{dt}\int P(s)\,|M(s,t)|\,ds\le-\nu\int P(s)\,|M(s,t)|\,ds,
$$

which proves the inequality (21) using the Gronwall lemma.

Inserting this exponential decay of ∫P(s)|M(s,t)|ds in (35) proves (22).

Step 2. Proof of ( 23 ). In order to better explain our strategy, we begin with a global Lipschitz estimate on N and then prove the exponential decay.

From the definition of N, we have

$$
N'(t)=\int_0^{+\infty}N'(t)\,\frac{\partial p}{\partial N}\big(s,N(t)\big)\,n(s,t)\,ds+\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial t}\,ds.
$$

Using (2), and (7), we obtain

$$
(1-\eta)\,\big|N'(t)\big|\le\left|\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial t}\,ds\right|.
$$
(36)

We first prove a Lipschitz estimate. We write

$$
\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial t}\,ds=-\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial s}\,ds-\int_0^{+\infty}p^2\big(s,N(t)\big)\,n(s,t)\,ds+\int_0^{+\infty}p\big(s,N(t)\big)\int_0^{+\infty}K(s,u)\,p\big(u,N(t)\big)\,n(u,t)\,du\,ds.
$$

Integration by parts implies that

$$
\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial s}\,ds=-\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,n(s,t)\,ds.
$$

The Lipschitz bound on p and (2) give the estimate |N′(t)| ≤ C.

We now prove estimate (23), based on inequality (36). We recall the notation m = n − A = ∂M/∂s and write

$$
\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial n(s,t)}{\partial t}\,ds=\int_0^{+\infty}p\big(s,N(t)\big)\,\frac{\partial m(s,t)}{\partial t}\,ds=-\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,\frac{\partial M(s,t)}{\partial t}\,ds.
$$

Because M satisfies the closed equation (32), we deduce that

$$
\begin{aligned}
-\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,\frac{\partial M(s,t)}{\partial t}\,ds&=\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,\frac{\partial M(s,t)}{\partial s}\,ds+\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,p(s,A^*)\,M(s,t)\,ds\\
&\quad+\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\int_{u=s}^{+\infty}\frac{\partial p(u,A^*)}{\partial u}\,f(s,u)\,M(u,t)\,du\,ds\\
&\quad-\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\int\Phi(s,u)\,p(u,A^*)\,M(u,t)\,du\,ds-\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,R(s,t)\,ds.
\end{aligned}
$$

To continue, we control each term by quantities such as ∫₀^{+∞}|M(s,t)|ds, for which we have already proved exponential decay:

$$
\left|\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,\frac{\partial M(s,t)}{\partial s}\,ds\right|=\left|\int_0^{+\infty}\frac{\partial^2 p\big(s,N(t)\big)}{\partial s^2}\,M(s,t)\,ds\right|\le C\int_0^{+\infty}|M(s,t)|\,ds,\qquad \left|\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\,p(s,A^*)\,M(s,t)\,ds\right|\le C\int_0^{+\infty}|M(s,t)|\,ds.
$$

Using assumption (5),

$$
\left|\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\int_{u=s}^{+\infty}\frac{\partial p(u,A^*)}{\partial u}\,f(s,u)\,M(u,t)\,du\,ds\right|\le C\int_0^{+\infty}|M(s,t)|\,ds
$$

because these integrals vanish for s> σ + and u> σ + . Next, assumption (12) gives

$$
\left|\int_0^{+\infty}\frac{\partial p\big(s,N(t)\big)}{\partial s}\int\Phi(s,u)\,p(u,A^*)\,M(u,t)\,du\,ds\right|\le C\int_0^{+\infty}|M(s,t)|\,ds.
$$

Finally, estimate (33) and Theorem 2.2 imply that there exist C and ν>0 such that

$$
\int_0^{+\infty}|R(s,t)|\,ds\le C e^{-\nu t}.
$$

All these terms have exponential decay, and thus, back to inequality (36), this concludes the proof of (23).

The proof of Theorem 2.2 is now complete.  □

Step 3. Proof of Theorem 2.3. Thanks to (32), the function J(s,t) := ∂M(s,t)/∂t satisfies the equation

$$
\frac{\partial J(s,t)}{\partial t}+\frac{\partial J(s,t)}{\partial s}+p(s,A^*)\,J(s,t)=-\int_{u=s}^{+\infty}\frac{\partial p(u,A^*)}{\partial u}\,f(s,u)\,J(u,t)\,du+\int\Phi(s,u)\,p(u,A^*)\,J(u,t)\,du+\tilde R(s,t)
$$
(37)

with

$$
\begin{cases}
\tilde R(s,t)\equiv 0 & \text{for } s\ge\sigma^+,\\[4pt]
\tilde R(s,t)=\displaystyle\int_s^{\sigma^+}f(s,u)\,N'(t)\,\frac{\partial p}{\partial N}\big(u,N(t)\big)\,n(u,t)\,du-\displaystyle\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial t}\,du & \text{for } s\le\sigma^+.
\end{cases}
$$

Inserting absolute values in Eq. (37) and multiplying by P built in Lemma C.1, we obtain that there exists λ < 0 such that

$$
\frac{d}{dt}\int P(x)\,|J(x,t)|\,dx\le\lambda\int P(x)\,|J(x,t)|\,dx+\int_0^{+\infty}P(x)\,|\tilde R(x,t)|\,dx.
$$

We claim that there exist C>0, ν>0 such that

$$
\int_0^{+\infty}P(x)\,|\tilde R(x,t)|\,dx=\int_0^{\sigma^+}P(x)\,|\tilde R(x,t)|\,dx\le C e^{-\nu t}.
$$
(38)

To prove it, we proceed by estimating each term of R̃. For s ≤ σ⁺, we control the first term as

$$
\int_s^{\sigma^+}P(s)\,f(s,u)\left|N'(t)\,\frac{\partial p}{\partial N}\big(u,N(t)\big)\right|n(u,t)\,du\le\big|N'(t)\big|\left\|\frac{\partial p}{\partial N}\right\|_{L^\infty([0,\infty)\times[0,\infty))}\|P\|_{L^\infty[0,\sigma^+]}\le C e^{-\nu t}
$$
(39)

thanks to the exponential decay of N′(t) in (23).

The second contribution to R ˜ , namely

$$
\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial t}\,du\right|ds
$$

is longer to estimate. Using the equation on n, it is the sum of three integrals

$$
\begin{aligned}
&\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial u}\,du\right|ds\\
&\quad+\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,p\big(u,N(t)\big)\,n(u,t)\,du\right|ds\\
&\quad+\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\int_0^{+\infty}K(u,w)\,p\big(w,N(t)\big)\,n(w,t)\,dw\,du\right|ds.
\end{aligned}
$$

Using assumption (7) and estimate (22), the last two integrals are controlled by

$$
C\,\big|A^*-N(t)\big|\le C e^{-\nu t}.
$$

It remains to estimate the first integral. We integrate by parts and write

$$
\begin{aligned}
\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial u}\,du&=-f(s,s)\,\big[p(s,A^*)-p\big(s,N(t)\big)\big]\,n(s,t)\\
&\quad+\int_s^{\sigma^+}\Phi(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du\\
&\quad-\int_s^{\sigma^+}f(s,u)\,\frac{\partial}{\partial u}\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,n(u,t)\,du.
\end{aligned}
$$

Integrating the above equality between 0 and σ + and using assumptions (11), (12), (6), (4), (7), we obtain that there exists a constant C such that

$$
\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial u}\,du\right|ds\le C\,\big|A^*-N(t)\big|\le C e^{-\nu t},
$$

and the last inequality is again (22). We deduce that

$$
\int_{s=0}^{\sigma^+}\left|\int_s^{\sigma^+}f(s,u)\,\big[p(u,A^*)-p\big(u,N(t)\big)\big]\,\frac{\partial n(u,t)}{\partial t}\,du\right|ds\le C e^{-\nu t}.
$$
(40)

Putting the estimates (39) and (40) together, we conclude the proof of estimate (38).

Therefore, we conclude that

$$
\frac{d}{dt}\int P(x)\,|J(x,t)|\,dx\le\lambda\int P(x)\,|J(x,t)|\,dx+C e^{-\nu t},
$$

and thus there exist constants C and ν > 0 such that

$$
\int_0^{+\infty}P(x)\,|J(x,t)|\,dx\le C e^{-\nu t}.
$$

Using the same computations as in the proof of Proposition 3.1, together with the bound (33) on R, we obtain Theorem 2.3. □

5 Existence of a Stationary State

This section is devoted to the proof of existence and uniqueness of stationary states for Eq. (1) in the case where the network is weakly connected. We begin with the linear case and then treat the weakly nonlinear case, that is, weakly connected networks.

5.1 The Linear Case

The following theorem holds.

Theorem 5.1 (Stationary states. Linear case)

Assume that K satisfies assumptions (8), (12), and

$$
0\le p(s)\le p_M,\qquad p(s)\ge p_*>0\ \text{ for } s\ge s_*.
$$
(41)

Then there exists a unique solution of the equation

$$
\begin{cases}
\dfrac{\partial A(s)}{\partial s}+p(s)\,A(s)=\displaystyle\int K(s,u)\,p(u)\,A(u)\,du, & s\ge 0,\\[4pt]
A(0)=0,\qquad \displaystyle\int_0^\infty A(s)\,ds=1.
\end{cases}
$$
(42)

The proof is standard [8, 14, 18, 19] and based on the Krein–Rutman theorem (see [7]).

Proof of Theorem 5.1 To justify the computations in this proof, we restrict ourselves to the case when p and K are continuous, which allows us to use a consequence of the Krein–Rutman theorem as recalled in Theorem D.1; this is not a restriction because the extension to our assumptions in Sect. 1 follows from a standard regularization argument and passage to the limit, as we do below.

So as to ensure positivity and compactness, we introduce two truncation parameters, ε > 0 small enough and R > 0 large enough. According to Theorem D.1, there are λ_{ε,R} ∈ ℝ and A_{ε,R} ∈ C¹([0,R]) such that

$$
\begin{cases}
\dfrac{\partial A_{\varepsilon,R}(s)}{\partial s}+\big(p(s)+\lambda_{\varepsilon,R}\big)\,A_{\varepsilon,R}(s)=\displaystyle\int_0^R K(s,u)\,p(u)\,A_{\varepsilon,R}(u)\,du, & 0\le s\le R,\\[4pt]
A_{\varepsilon,R}(0)=\varepsilon,\qquad A_{\varepsilon,R}(s)>0,\qquad \displaystyle\int_0^R A_{\varepsilon,R}(s)\,ds=1.
\end{cases}
$$
(43)

To prove Theorem 5.1, we need to pass to the limit in Eq. (43) as ε and R⁻¹ go to 0. To do this, it is enough to prove compactness for the eigenvalues λ_{ε,R} and convergence of the functions A_{ε,R} to a function A satisfying the properties of Theorem 5.1.

We begin with the following a priori estimates.

Lemma 5.2 For all ε>0 and R>0 large enough, the following two estimates hold:

$$
\varepsilon-\frac{2}{R}\le\lambda_{\varepsilon,R}\le\varepsilon,\qquad (1-\theta)\int_0^R s\,A_{\varepsilon,R}(s)\,ds\le\frac{1}{p_*}+s_*(1-\theta),
$$
(44)

where θ is defined in (12) and s_*, p_* are defined in (41).

Proof Integrating Eq. (43) between 0 and x with respect to the variable s, we obtain that

$$
\lambda_{\varepsilon,R}\int_0^x A_{\varepsilon,R}(s)\,ds=\varepsilon-A_{\varepsilon,R}(x)-\int_0^x p(u)\,A_{\varepsilon,R}(u)\,du+\int_0^R f(x,u)\,p(u)\,A_{\varepsilon,R}(u)\,du.
$$
(45)

Choosing x=R and thanks to assumption (8), we find the upper bound on λ ε , R

$$
\lambda_{\varepsilon,R}=\varepsilon-A_{\varepsilon,R}(R)\le\varepsilon.
$$
(46)

Next, we multiply Eq. (43) by s and integrate between 0 and R. We find thanks to (13)

$$
-1+R\,A_{\varepsilon,R}(R)+\lambda_{\varepsilon,R}\int_0^R s\,A_{\varepsilon,R}(s)\,ds+\int_0^R s\,p(s)\,A_{\varepsilon,R}(s)\,ds=\int_0^R\!\!\int_0^R sK(s,u)\,p(u)\,A_{\varepsilon,R}(u)\,du\,ds\le\theta\int_0^R u\,p(u)\,A_{\varepsilon,R}(u)\,du,
$$

which we can also write, using the above expression for λ_{ε,R}, as

$$
(1-\theta)\int_0^R s\,p(s)\,A_{\varepsilon,R}(s)\,ds+\varepsilon\int_0^R s\,A_{\varepsilon,R}(s)\,ds\le 1+A_{\varepsilon,R}(R)\left[\int_0^R s\,A_{\varepsilon,R}(s)\,ds-R\right]\le 1.
$$

Using (41), we conclude that

$$
(1-\theta)\int_0^R s\,A_{\varepsilon,R}(s)\,ds\le\frac{1}{p_*}+s_*(1-\theta),
$$

which proves the second estimate of Lemma 5.2. Inserting this estimate in the previous inequality also gives

$$
\left[R-s_*-\frac{1}{p_*(1-\theta)}\right]A_{\varepsilon,R}(R)\le 1.
$$

For R large enough, we obtain

$$
A_{\varepsilon,R}(R)\le\frac{2}{R}.
$$

Formula (46) then gives the lower estimate for λ_{ε,R}, which concludes the proof of Lemma 5.2. □

We continue our a priori estimates with the following lemma.

Lemma 5.3 For ε>0 small enough and R>0 large enough, we have

$$
\|A_{\varepsilon,R}\|_{L^\infty([0,\infty))}\le p_M+2\varepsilon+\frac{2}{R},\qquad \left\|\frac{\partial}{\partial s}A_{\varepsilon,R}\right\|_{L^1([0,\infty))}\le 2p_M+\varepsilon+\frac{2}{R}.
$$

Proof From Eq. (45), we have

$$
A_{\varepsilon,R}(x)\le\varepsilon-\lambda_{\varepsilon,R}\int_0^x A_{\varepsilon,R}(s)\,ds+\int_0^R f(x,u)\,p(u)\,A_{\varepsilon,R}(u)\,du\le\varepsilon+|\lambda_{\varepsilon,R}|+\|p\|_{L^\infty([0,\infty))}
$$

and, from the estimate in Lemma 5.2, we deduce the L bound on A ε , R in Lemma 5.3. The other bound follows directly from the equation. □

Using Lemma 5.3, we may extract from A_{ε,R} a subsequence which converges locally strongly to a function A ∈ L∞([0,∞)). Moreover, using the second estimate of Lemma 5.2, which provides tightness, we obtain that A still satisfies

$$
\int_0^{+\infty}A(s)\,ds=1.
$$

To conclude the existence proof of Theorem 5.1, we pass to the limit in the weak form of (43) as ε → 0, R → ∞.

Uniqueness is a standard property in Krein–Rutman theory and we refer to [18] for the particular example at hand.  □

5.2 The Nonlinear Case

Theorem 5.4 Assume (4)–(8) and (12). Then there is a steady solution A(s) to (1).

Proof For a given N ≥ 0, we know from the previous subsection that there is a solution Ā(s,N) to

$$
\begin{cases}
\dfrac{\partial}{\partial s}\bar A(s,N)+p(s,N)\,\bar A(s,N)=\displaystyle\int K(s,u)\,p(u,N)\,\bar A(u,N)\,du, & s\ge 0,\\[4pt]
\bar A(0,N)=0,\qquad \bar A(s,N)\ge 0,\qquad \displaystyle\int_0^\infty\bar A(s,N)\,ds=1.
\end{cases}
$$

A stationary state for the nonlinear equation is a fixed point to the mapping

$$
N=F(N):=\int_0^{+\infty}p(s,N)\,\bar A(s,N)\,ds.
$$

The following properties hold true by general continuity properties (that themselves follow from the bounds proved for the linear equation and the uniqueness of the solution to (42))

$$
F\ \text{is continuous},\qquad F(0)>0,\qquad F(N)\le p_M\int_0^{+\infty}\bar A(s,N)\,ds=p_M.
$$

Therefore, by the intermediate value theorem, there is at least one steady state. □

Notice that uniqueness is expected under the smallness assumption (7), because |F′(N)| should then be small (here, we leave this point without proof).
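For completeness, here is a minimal numerical sketch of this construction: an inner solver for the linear stationary problem at fixed N (an implicit upwind sweep iterated on the gain term, with renormalization to a probability density), and an outer fixed-point iteration on N ↦ F(N). The grids, iteration counts, and the renormalization trick are our own choices, not the paper's algorithm; a rate function p(s, N) as in the earlier sketch is assumed.

```python
import numpy as np

def stationary_density(p_vals, K_h, h, sweeps=500, tol=1e-10):
    """Solve A' + p A = int K(.,u) p(u) A(u) du, A(0)=0, int A = 1,
    on a uniform grid. K_h[j, i] ~ K(s_j, s_i) * h, so K_h @ w
    approximates int K(s_j, u) w(u) du."""
    m = len(p_vals)
    A = np.full(m, 1.0 / (m * h))            # initial guess: uniform
    for _ in range(sweeps):
        gain = K_h @ (p_vals * A)            # fragmentation gain term
        A_new = np.zeros(m)
        for j in range(1, m):                # implicit upwind sweep in s
            A_new[j] = (A_new[j - 1] + h * gain[j]) / (1.0 + h * p_vals[j])
        A_new /= np.trapz(A_new, dx=h)       # renormalize: int A = 1
        if np.max(np.abs(A_new - A)) < tol:
            return A_new
        A = A_new
    return A

def steady_state(p, s, K_h, h, outer=200):
    """Outer fixed point N -> F(N) = int p(s,N) A_bar(s,N) ds."""
    N = 0.5 * np.max(p(s, 0.0))              # arbitrary starting point
    for _ in range(outer):
        A = stationary_density(p(s, N), K_h, h)
        N = np.trapz(p(s, N) * A, dx=h)      # F(N)
    return N, A
```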

6 Numerical Simulations and Spontaneous Activity

When our smallness assumptions are not fulfilled, the neurons may undergo synchronization, leading to spontaneous activity of the network. The aim of this section is to illustrate this regime through numerical simulations and to show the effect of the fragmentation term when the flux of neurons N(t) does not converge to a stationary state but oscillates.

To do so, we compare the dynamics of N in the following two “extreme” cases:

  • K(s,u) = δ_{s=0},

    which is the case studied in [17], where all the neurons come back to the same state after discharge,

  • K(s,u) = δ_{s=u/2},

    where the neurons, after discharge, reach a state proportional to the time elapsed since their last discharge.

Then we choose the discharge rate p as in the article [17], which allows us to obtain, theoretically and numerically, periodic solutions in the particular case K(s,u) = δ_{s=0}, that is,

$$
p\big(s,N(t)\big):=\mathbf{1}_{s\ge\sigma(N(t))}
$$
(47)

with a decreasing function σ built as follows: for α>0, we define the two functions

$$
0<N_-(\alpha):=\frac{1}{2e^\alpha-1}<N_+(\alpha):=\frac{e^\alpha}{2e^\alpha-1}<1,
$$
(48)

and we choose the Lipschitz continuous discharge threshold σ as

$$
\sigma(x)=
\begin{cases}
2\alpha & \text{on } [0,N_-(\alpha)],\\
2\alpha-\ln(x)+\ln\big(N_-(\alpha)\big) & \text{on } [N_-(\alpha),N_+(\alpha)],\\
\alpha & \text{on } [N_+(\alpha),\infty).
\end{cases}
$$
(49)
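For reference, a direct transcription of (47)–(49) (vectorized; the clipping guard in the logarithm is ours):

```python
import numpy as np

def N_minus(alpha):
    return 1.0 / (2.0 * np.exp(alpha) - 1.0)             # N_-(alpha) in (48)

def N_plus(alpha):
    return np.exp(alpha) / (2.0 * np.exp(alpha) - 1.0)   # N_+(alpha) in (48)

def sigma(x, alpha):
    """Lipschitz threshold (49): 2*alpha below N_-, alpha above N_+,
    logarithmic interpolation in between."""
    x = np.asarray(x, dtype=float)
    mid = 2.0 * alpha - np.log(np.maximum(x, 1e-300)) + np.log(N_minus(alpha))
    return np.where(x <= N_minus(alpha), 2.0 * alpha,
                    np.where(x >= N_plus(alpha), alpha, mid))

def p(s, N, alpha):
    """Discharge rate (47): indicator of s >= sigma(N)."""
    return (np.asarray(s) >= sigma(N, alpha)).astype(float)
```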

We now compare numerical simulations of the dynamics of N with the two kernels mentioned above.

  • For K = δ_{s=0}, with the choice of p given by (47), the theoretical study and numerical results in [17] have shown that for all α > 0 there exists a very large class of periodic solutions; Fig. 1 (left) depicts such a periodic solution. Moreover, the numerical observations show that the dynamics strongly depend on the initial data (see Fig. 2 and the article [17]).

Fig. 1 Total neural activity N(t) computed with α = 2 and initial data proportional to e^{−s}. Left: K(s,u) = δ_{s=0}. Right: K(s,u) = δ_{s=u/2}. The continuous lines give the values N_− and N_+; in the right panel, N(t) is the top line

Fig. 2 Total neural activity N(t) computed with α = 4 and K(s,u) = δ_{s=0}. Left: initial data proportional to e^{−s}. Right: a more complex initial data (as in Proposition 2 of [17]). We see that two different periodic solutions occur with these two different initial data. The continuous lines give the values N_− and N_+

  • The kernel K(s,u) = δ_{s=u/2} seems to create a “smoothing effect.” Indeed, unlike the former case, when α is small enough, the numerical solution N converges to a stationary state (see Fig. 1, right). Moreover, for α fixed large enough, there does not seem to exist a large spectrum of periodic solutions as in the case K(s,u) = δ_{s=0}; more precisely, we numerically obtain only one periodic solution (see Figs. 3 and 4). The numerical methods used here are analogous to those in the article [17]; we refer to that article for a description of the algorithm (a minimal discretization sketch is also given after the figures below).

Fig. 3 Total neural activity N(t) computed with α = 4 and K(s,u) = δ_{s=u/2}. Left: initial data proportional to e^{−s}. Right: a more complex initial data (as in Proposition 2 of [17]). We observe that the two functions N are the same. The continuous lines give the values N_− and N_+

Fig. 4 Total neural activity N(t) computed with α = 3 and K(s,u) = δ_{s=u/2}. Left: initial data proportional to e^{−s}. Right: a more complex initial data (as in Proposition 2 of [17]). We observe that the two functions N are the same. The continuous lines give the values N_− and N_+
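As announced above, here is a minimal explicit upwind sketch of a scheme for Eq. (1) with the rate (47), reusing p, N_minus, N_plus from the previous sketch; the CFL choice dt = ds, the grid truncation, and the mass-redistribution bookkeeping are illustrative choices of ours, not the actual algorithm of [17].

```python
def simulate(alpha=2.0, L=10.0, ds=0.01, T=20.0, kernel="half"):
    """Upwind scheme for n_t + n_s + p n = gain, n(0,t) = 0, with
    K = delta_{s=0} ("zero") or K = delta_{s=u/2} ("half")."""
    s = np.arange(0.0, L, ds)
    n = np.exp(-s)
    n /= np.sum(n) * ds                      # normalize: int n ds = 1
    dt = ds                                  # CFL: dt = ds
    N, N_hist = 0.0, []
    for _ in range(int(T / dt)):
        N = np.sum(p(s, N, alpha) * n) * ds  # discharge flux N(t)
        loss = p(s, N, alpha) * n
        n = n - dt * np.diff(n, prepend=0.0) / ds - dt * loss
        if kernel == "zero":                 # all mass re-enters at s = 0
            n[0] += dt * np.sum(loss)
        else:                                # mass at s_j re-enters at s_{j//2}
            np.add.at(n, np.arange(len(s)) // 2, dt * loss)
        N_hist.append(N)
    return np.array(N_hist)
```

In our intention, simulate(alpha=2.0, kernel="half") should relax toward a constant activity while kernel="zero" with larger α can lock onto periodic orbits, qualitatively reproducing Figs. 1–4.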

Appendix A: An Example of Coefficients p and K

The assumptions in Sect. 2.1 are abstract, and we can explain their meaning in the particular case of the introduction, that is, for two functions σ : [0,∞) → [σ_−, σ_+] and ψ : [0,∞) → [0,∞),

$$
p(s,N)=H_\delta\big(s-\sigma(N)\big),\qquad K(s,u)=\delta\big(s-\psi(u)\big).
$$

One can readily compute that

$$
f(s,u)=\mathbf{1}_{\{s>\psi(u)\}},\qquad \Phi(s,u)=\psi'(u)\,\delta\big(s-\psi(u)\big),\qquad \int_0^\infty\Phi(s,u)\,ds=\psi'(u),\qquad \int_0^u sK(s,u)\,ds=\psi(u).
$$

Therefore, the assumption (12) reduces to

$$
0\le\psi'(u)\le\theta<1,\qquad \psi(u)\le\theta u<u.
$$

Finally, the conditions (15) and (16) reduce to saying that σ_+ and θ are small enough.
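A quick numerical sanity check of these identities (our own, approximating the Dirac kernel by a narrow Gaussian for the concrete choice ψ(u) = u/2):

```python
import numpy as np

u, eps = 3.0, 1e-3
s = np.linspace(0.0, u, 200_001)
K = np.exp(-(s - u / 2) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
print(np.trapz(K, s))      # ~ 1    : normalization in (8)
print(np.trapz(s * K, s))  # ~ u/2  : int s K(s,u) ds = psi(u)
```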

Appendix B: A Direct Use of the Fundamental Formula on |M|

As mentioned earlier, in order to treat nonconstant coefficients, our fundamental new idea is to use the formulas (29), (30) on the integral M(s,t), which vanishes at s = 0 and s = ∞. We recall the latter:

$$
\frac{\partial|M(s,t)|}{\partial t}+\frac{\partial|M(s,t)|}{\partial s}+p(s)\,|M(s,t)|\le\int_{u=s}^{+\infty}\Big[\big|p'(u)\big|\,f(s,u)+p(u)\,\Phi(s,u)\Big]\,|M(u,t)|\,du.
$$
(50)

We can give a direct proof of exponential decay for M based on this expression, using simpler assumptions on p and K that, however, do not cover the case when p has large variation and p(s) can vanish for s near 0. It is a natural extension of the case when p is constant, as covered in [12].

We use the definition, compatible with the example in Appendix A

$$
\underline{\psi}(u)=\int_0^u f(s,u)\,ds=u-\psi(u),\qquad 1-\theta\le\underline{\psi}'(u)\le 1.
$$

We are going to prove the following proposition.

Proposition B.1 With the assumptions (8), (10), (12), if there exists ν>0 such that

$$
p(u)\,\underline{\psi}'(u)-\big|p'(u)\big|\,\underline{\psi}(u)\ge\nu,
$$

then one has

$$
\int_0^\infty|M(s,t)|\,ds\le e^{-\nu t}\int_0^\infty|M(s,0)|\,ds.
$$

Proof We may integrate Eq. (50) and write, using the Fubini theorem

$$
\frac{d}{dt}\int_0^\infty|M(s,t)|\,ds+\int_0^\infty p(s)\,|M(s,t)|\,ds\le\int_0^\infty\!\!\int_0^\infty\mathbf{1}_{\{0\le s\le u\}}\left[\big|p'(u)\big|\,f(s,u)-p(u)\,\frac{\partial f(s,u)}{\partial u}\right]|M(u,t)|\,du\,ds\le\int_0^\infty\Big[\big|p'(u)\big|\,\underline{\psi}(u)-p(u)\,\underline{\psi}'(u)+p(u)\Big]\,|M(u,t)|\,du.
$$

Therefore, we arrive at

$$
\frac{d}{dt}\int_0^\infty|M(s,t)|\,ds\le\int_0^\infty\Big[\big|p'(u)\big|\,\underline{\psi}(u)-p(u)\,\underline{\psi}'(u)\Big]\,|M(u,t)|\,du\le-\nu\int_0^\infty|M(u,t)|\,du.
$$

 □

Appendix C: A Noncompact Eigenproblem

The proof of convergence to a steady state with an exponential decay rate relies on the existence of an eigenpair (λ<0, P>0) solution to

$$
-\frac{\partial P(s)}{\partial s}+\big(\lambda+p^*(s)\big)\,P(s)=\int_0^s\left[\left|\frac{\partial p^*(s)}{\partial s}\right|f(u,s)+p^*(s)\,\Phi(u,s)\right]P(u)\,du.
$$
(51)

Its solution uses in a fundamental way the smallness assumptions of Sect. 2.1 linking p and K. Indeed, the following lemma holds.

Lemma C.1 With the assumptions (4), (5), (8), (10), (12), (15)—on p*(s) = p(s,A*) rather than p—and on K, for λ < 0 close enough to 0 there is a unique solution P to Eq. (51) with P(0) = 1, and there exists a constant B > 0 such that

$$
\frac{1}{B}\le P(x)\le B\,P(y)\quad\text{for } 0\le x\le y.
$$

Proof Equation (51) is a delay differential equation, and thus has a global solution, defined for all s ≥ 0 and for all λ. We consider an interval [0,s₀) where the solution P is positive, and we prove that for λ < 0 close enough to zero we can take s₀ = ∞. We argue for λ = 0, and then by continuity for λ close enough to 0.

The solution to Eq. (51) with λ=0 satisfies

$$
P(s)\le e^{\int_0^s p^*(u)\,du}\quad\text{for } 0\le s\le s_0.
$$
(52)

Therefore, integrating (51), we have, for s ≤ min(s₀, σ*),

$$
P(s)\ge 1-e^{\int_0^s p^*(u)\,du}\left(\sigma^*\int_0^s\left|\frac{\partial p^*(u)}{\partial u}\right|du+\theta\int_0^s p^*(u)\,du\right)\ge 1-B^*>0
$$
(53)

thanks to assumption (15). In particular, we conclude that s₀ > σ*.

At this stage, we may argue by continuity: for λ < 0 close enough to 0, we still have

$$
P(s)\le e^{\int_0^s p^*(u)\,du}+O(\lambda),\qquad P(s)\ge 1-B^*+O(\lambda)>0\quad\text{for } 0\le s\le\sigma^*.
$$
(54)

For s > σ*, we write (51) as

$$
\frac{\partial P(s)}{\partial s}=P(s)\left[p_M+\lambda-\int_0^s\Phi(u,s)\,\frac{P(u)}{P(s)}\,du\right].
$$

We are going to prove that P is increasing for s > σ*, that is, that the bracket is positive. This is because we can write

$$
\int_0^s\Phi(u,s)\,\frac{P(u)}{P(s)}\,du\le\int_0^{\sigma^*}\Phi(u,s)\,du\;\frac{\|P\|_{L^\infty(0,\sigma^*)}}{\inf_{s\in(0,\sigma^*)}P(s)}+\int_{\sigma^*}^s\Phi(u,s)\,du.
$$

Estimates (54) on P and assumption (15) tell us that

$$
\int_0^{\sigma^*}\Phi(u,s)\,du\;\frac{\|P\|_{L^\infty(0,\sigma^*)}}{\inf_{s\in(0,\sigma^*)}P(s)}+\int_{\sigma^*}^s\Phi(u,s)\,du<p_M.
$$

Therefore, we obtain that for λ < 0 close enough to 0 the bracket is positive, and P is increasing on [σ*, +∞), which proves Lemma C.1. □

Appendix D: A Consequence of the Krein–Rutman Theorem

To prove the existence of steady states, we have used the existence of a solution to a regularized eigenproblem. Namely, we have the following.

Theorem D.1 ([7, 18])

Let R > 0, E = C⁰([0,R]), and let

$$
0\le B(\cdot)\in E,\qquad 0\le b(\cdot,\cdot)\in C^0\big([0,R]\times[0,R]\big),
$$

and let ε > 0 be small enough. Then there is a unique pair λ ∈ ℝ, A ∈ C¹([0,R]) solution of the equation

$$
\begin{cases}
\dfrac{\partial A(s)}{\partial s}+\big(B(s)+\lambda\big)\,A(s)=\displaystyle\int_0^R b(s,u)\,A(u)\,du, & 0\le s\le R,\\[4pt]
A(0)=\varepsilon\displaystyle\int_0^R A(y)\,dy,\qquad A(s)>0,\qquad \displaystyle\int_0^R A(s)\,ds=1.
\end{cases}
$$

References

  1. Balagué D, Cañizo JA, Gabriel P: Fine asymptotics of profiles and relaxation to equilibrium for growth-fragmentation equations with variable drift rates. Preprint; 2012.

  2. Bansaye V, Tran VC: Branching Feller diffusion for cell division with parasite infection. ALEA Lat Am J Probab Math Stat 2011, 8: 95–127.

  3. Bardet J-B, Christen A, Guillin A, Malrieu F, Zitt P-A: Total variation estimates for the TCP process. Preprint; 2011.

  4. Cáceres MJ, Cañizo JA, Mischler S: Rate of convergence to self-similarity for the fragmentation equation in L¹ spaces. Commun Appl Ind Math 2011, 1(2):299–308.

  5. Cáceres MJ, Cañizo JA, Mischler S: Rate of convergence to an asymptotic profile for the self-similar fragmentation and growth-fragmentation equations. J Math Pures Appl 2011, 96(4):334–362. 10.1016/j.matpur.2011.01.003

  6. Calvez V, Lenuzza N, Doumic M, Deslys J-P, Mouthon F, Perthame B: Prion dynamic with size dependency—strain phenomena. J Biol Dyn 2010, 4(1):28–42. 10.1080/17513750902935208

  7. Dautray R, Lions J-L: Mathematical Analysis and Numerical Methods for Sciences and Technology. Springer, Berlin; 1990.

  8. Doumic Jauffret M, Gabriel P: Eigenelements of a general aggregation-fragmentation model. Math Models Methods Appl Sci 2010, 20(5):757–783. 10.1142/S021820251000443X

  9. Engler H, Pruss J, Webb GF: Analysis of a model for the dynamics of prions II. J Math Anal Appl 2006, 324: 98–117. 10.1016/j.jmaa.2005.11.021

  10. Farkas JZ, Hagen T: Stability and regularity results for a size-structured population model. J Math Anal Appl 2007, 328(1):119–136. 10.1016/j.jmaa.2006.05.032

  11. Gabriel P: Long-time asymptotics for nonlinear growth-fragmentation equations. arXiv:1102.2871; 2011.

  12. Laurençot P, Perthame B: Exponential decay for the growth-fragmentation/cell-division equations. Commun Math Sci 2009, 7(2):503–510. 10.4310/CMS.2009.v7.n2.a12

  13. Laurençot P, Walker C: Well-posedness for a model of prion proliferation dynamics. J Evol Equ 2007, 7: 241–264. 10.1007/s00028-006-0279-2

  14. Michel P: Existence of a solution to the cell division eigenproblem. Math Models Methods Appl Sci 2006, 16(7):1125–1153.

  15. Michel P, Mischler S, Perthame B: General relative entropy inequality: an illustration on growth models. J Math Pures Appl 2005, 84(9):1235–1260. 10.1016/j.matpur.2005.04.001

  16. Pakdaman K, Perthame B, Salort D: Dynamics of a structured neuron population. Nonlinearity 2010, 23: 55–75. 10.1088/0951-7715/23/1/003

  17. Pakdaman K, Perthame B, Salort D: Relaxation and self-sustained oscillations in the time elapsed neuron network model. SIAM J Appl Math 2013, 73(3):1260–1279. 10.1137/110847962

  18. Perthame B: Transport Equations in Biology. Frontiers in Mathematics. Birkhäuser, Basel; 2007.

  19. Perthame B, Ryzhik L: Exponential decay for the fragmentation or cell-division equation. J Differ Equ 2005, 210: 155–177. 10.1016/j.jde.2004.10.018

  20. Simonett G, Walker C: On the solvability of a mathematical model for prion proliferation. J Math Anal Appl 2006, 324(1):580–603. 10.1016/j.jmaa.2005.12.036

Author information

Correspondence to Khashayar Pakdaman.

Additional information

Competing Interests

The authors declare that they have no competing interests.

Authors’ Contributions

The authors have equally contributed to this paper.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite this article

Pakdaman, K., Perthame, B. & Salort, D. Adaptation and Fatigue Model for Neuron Networks and Large Time Asymptotics in a Nonlinear Fragmentation Equation. J. Math. Neurosc. 4, 14 (2014). https://doi.org/10.1186/2190-8567-4-14
