
Large Deviations for Nonlocal Stochastic Neural Fields

Abstract

We study the effect of additive noise on integro-differential neural field equations. In particular, we analyze an Amari-type model driven by a Q-Wiener process, and focus on noise-induced transitions and escape. We argue that proving a sharp Kramers’ law for neural fields poses substantial difficulties, but that one may transfer techniques from stochastic partial differential equations to establish a large deviation principle (LDP). Then we demonstrate that an efficient finite-dimensional approximation of the stochastic neural field equation can be achieved using a Galerkin method and that the resulting finite-dimensional rate function for the LDP can have a multiscale structure in certain cases. These results form the starting point for an efficient practical computation of the LDP. Our approach also provides the technical basis for further rigorous study of noise-induced transitions in neural fields based on Galerkin approximations.

Mathematics Subject Classification (2000): 60F10, 60H15, 65M60, 92C20.

1 Introduction

Starting from the classical works of Wilson/Cowan [64] and Amari [1], there has been considerable interest in the analysis of spatiotemporal dynamics of mesoscale models of neural activity. Continuum models for neural fields often take the form of nonlinear integro-differential equations where the integral term can be viewed as a nonlocal interaction term; see [37] for a derivation of neural field models. Stationary states, traveling waves, and pattern formation for neural fields have been studied extensively; see, e.g., [20, 29] or the recent review by Bressloff [14] and references therein.

In this paper, we are going to study a stochastic neural field model. There are several motivations for our approach. In general, it is well known that intra- and inter-neuron dynamics [27] are subject to fluctuations. Many meso- or macroscale continuum models carry stochastic perturbations due to finite-size effects [38, 61]. Therefore, there is certainly a genuine need to develop new techniques to analyze random neural systems [50]. For stochastic neural fields, there is also the direct motivation to understand the relation between noise and short-term working memory [52] as well as noise-induced phenomena [54] in perceptual bistability [62]. Although an eventual goal is to match results from stochastic neural fields to actual cortex data [35], we shall not attempt such a comparison here. However, the techniques we develop could make it easier to understand the relation between models and experiments; see Sect. 10 for a more detailed discussion.

There is a relatively small body of fairly recent work on stochastic neural fields, which we briefly review here. Brackley and Turner [11] study a neural field with a gain function that has a random firing threshold. Fluctuating gain functions are also considered by Coombes et al. [22]. Bressloff and Webber [15] analyze a stochastic neural field equation with multiplicative noise, while Bressloff and Wilkinson [16] study the influence of extrinsic noise on neural fields. In all these works, the focus is on the statistics of traveling waves, such as front diffusion and the effects of noise on the wave speed. Hutt et al. [41] study the influence of external fluctuations on Turing bifurcation in neural fields. Kilpatrick and Ermentrout [43] are interested in stationary bump solutions. They observe numerically a noise-induced passage to extinction as well as noise-induced switching of bump solutions and conjecture that “a Kramers’ escape rate calculation” [[43], p. 16] could be applied to stochastic neural fields, but they do not carry out this calculation. In particular, the question is whether one can give a precise estimate of the mean transition time between metastable states for stochastic neural field equations; for a precise statement of the classical Kramers’ law, see Sect. 5, Eq. (32). However, to the best of our knowledge, there seems to be no general Kramers’ law or large deviation principle (LDP) calculation available for continuum neural field models, although large deviations have been of recent interest in neuroscience applications [13, 33]. It is one of the main goals of this paper to provide the basic steps toward a general theory.

Although Kramers’ law [5] and LDPs [26, 34] are well understood for finite-dimensional stochastic differential equations (SDEs), the work for infinite-dimensional evolution equations is much more recent. In particular, it has been shown very recently that one may extend Kramers’ law to certain stochastic partial differential equations (SPDEs) [4, 6, 7] driven by space-time white noise. The work of Berglund and Gentz [7] provides a quite general strategy for “lifting” a finite-dimensional Kramers’ law to the SPDE setting using a Galerkin approximation due to Blömker and Jentzen [8]. Since the transfer of PDE techniques to neural fields has been very successful, either directly [51] or indirectly [14, 21], one may conjecture that the same strategy also works for SPDEs and stochastic neural fields.

In this paper, we consider a rate-based (or Amari) neural field model driven by a Q-Wiener process W

dU_t(x) = [ −α U_t(x) + ∫_B w(x,y) f(U_t(y)) dy ] dt + ϵ dW_t(x)
(1)

for a trace-class operator Q, nonlinear gain function f, and an interaction kernel w; the technical details and definitions are provided in Sect. 2. Observe that (1) is a relatively general formulation of a nonlocal neural field. Hence, we expect that the techniques developed in this paper carry over to much wider classes of neural fields beyond (1) such as activity-based models.

Remark 1.1 To avoid confusion, we alert readers familiar with neural fields that the nonlinear gain function f in (1) is sometimes also called a “rate function.” However, we reserve “rate function” for a functional, to be denoted later by I, arising in the context of an LDP as this convention is standard in the context of LDPs.

Our main goal in the study of (1) is to provide estimates on the mean first-passage times between metastable states. In particular, we develop the basic analytical tools to approximate equation (1) as well as its rate function using a finite-dimensional Galerkin approximation. By making the rate function as explicit as possible, we not only provide a starting point for further analytical work, but also a framework for efficient numerical methods to analyze metastable states.

The paper is structured as follows: The motivation for (1) is given in Sect. 3, where a formal calculation shows that a space-time white noise perturbation of the gain function in a deterministic neural field leads to (1). In Sect. 4, we briefly describe important features of the deterministic dynamics of (1) for ϵ = 0. In particular, we collect several examples from the literature where the classical Kramers’ stability configuration of bistable stationary states separated by an unstable state occurs for Amari-type neural fields. In Sect. 5, we introduce the notation for Kramers’ law and LDPs and state the main theorem on finite-dimensional rate functions. In Sect. 6, we argue that a direct approach to Kramers’ law via “lifting” for (1) is likely to fail. Although the Amari model has a hidden energy-type structure, we have not been able to generalize the gradient-structure approach for SPDEs to the stochastic Amari model. This raises doubt as to whether a Kramers’ escape rate calculation can actually be carried out, i.e., whether one may express the prefactor of the mean first-passage time in the bistable case explicitly. Based on these considerations, we restrict ourselves to deriving an LDP. In Sect. 7, the LDP is established by a direct transfer of a result known for SPDEs. The disadvantage of this approach is that the resulting rate function is difficult to calculate, analytically or numerically, in practice. Therefore, we establish in Sect. 8 the convergence of a suitable Galerkin approximation for (1). Using this approximation, one may apply results about the LDP for SDEs, which we carry out in Sect. 9. In this context, we also notice that the trace-class noise can induce a multiscale structure of the rate function in certain cases. The last two observations lead to a tractable finite-dimensional approximation of the LDP and hence also an associated finite-dimensional approximation for first-exit time problems. We conclude the paper in Sect. 10 with implications of our work and remarks about future problems.

2 Amari-Type Models

In this study, we consider stochastic neural field models with additive noise of the form

dU_t(x) = [ −α U_t(x) + ∫_B w(x,y) f(U_t(y)) dy ] dt + ϵ dW_t(x)
(2)

for x ∈ B ⊂ R^d, a small parameter ϵ > 0, and t ≥ 0, where B is bounded and closed. In (2), the solution U models the averaged electrical potential generated by neurons at location x in an area of the brain B. Neural field equations of the form (2) are called Amari-type equations or rate-based neural field models. The equation is driven by an adapted space-time stochastic process W_t(x) on a filtered probability space (Ω, F, (F_t)_{t≥0}, P). The precise definition of the process W will be given below.

The parameter α > 0 is the decay rate for the potential, and w : B×B → R is a kernel that models the connectivity of neurons at location x to neurons at location y. Positive values of w model excitatory connections and negative values model inhibitory connections. The gain function f : R → R_+ relates the potential of neurons to inputs into other neurons. Typically, the gain function is chosen to be sigmoidal, for example (up to affine transformations of the argument), f(u) = (1 + e^{−u})^{−1} or f(u) = (tanh(u) + 1)/2. These examples of gain functions are bounded and infinitely often differentiable with bounded derivatives. However, throughout the paper, we only make the standing assumption that

(H1) the gain function f is globally Lipschitz continuous on R.

We may transfer Eq. (2) into the Hilbert space setting of infinite-dimensional stochastic evolution equations [23, 56] for the Hilbert space L²(B). Subsequently, brackets ⟨·,·⟩ always denote the inner product on this Hilbert space. Moreover, we introduce the following notation. Firstly, F denotes the nonlinear Nemytzkii operator defined from f, i.e., F(g)(x) = f(g(x)) for any function g ∈ L²(B). The condition (H1) implies that F : L²(B) → L²(B) is a Lipschitz continuous operator. Often, spatially continuous solutions to (2) are also of interest, and thus we note that the Nemytzkii operator also preserves its Lipschitz continuity on the Banach space C(B) with its norm ‖g‖_0 = sup_{x∈B} |g(x)|, due to B being bounded. Secondly, the linear operator K is the integral operator defined by the kernel w

Kg(x) = ∫_B w(x,y) g(y) dy  ∀ g ∈ L²(B).
(3)

Throughout the paper, we assume that

(H2) the kernel w is such that K is a compact, self-adjoint operator on L 2 (B).

We note that an integral operator is self-adjoint if and only if the kernel is symmetric, i.e., w(x,y) = w(y,x) for all x, y ∈ B. A sufficient condition for the compactness of K is, e.g., ‖w‖_{L²(B×B)} < ∞, in which case the operator is called a Hilbert–Schmidt operator. Since B is bounded, the continuity of the kernel w on B×B implies the compactness of K considered as an integral operator on C(B).

Then we rewrite Eq. (2) as a Hilbert space-valued stochastic evolution equation

dU_t = [ −α U_t + KF(U_t) ] dt + ϵ dW_t,
(4)

where W is an L 2 (B)-valued stochastic process. Interpreting the original equation in this form, we now give a definition of the noise process assuming that

(H3) W is a Q-Wiener process on L 2 (B), where the covariance operator Q is a nonnegative, symmetric trace class operator on L 2 (B).

For a detailed explanation of a Hilbert space-valued Q-Wiener process and its covariance operator, we refer to, e.g., [23, 56]. As the operator Q is nonnegative, symmetric, and of trace class, there exists an orthonormal basis of L²(B) consisting of eigenfunctions v_i and corresponding nonnegative real eigenvalues λ_i², which satisfy ∑_{i=1}^∞ λ_i² < ∞. It then holds that the Q-Wiener process W satisfies

W_t = ∑_{i=1}^∞ λ_i β_t^i v_i,
(5)

where the β^i form a sequence of independent scalar Wiener processes (cf. [[56], Proposition 2.1.10]). The series (5) converges in the mean-square sense in C([0,T], L²(B)). Furthermore, a straightforward adaptation of the proof of [[56], Proposition 2.1.10] shows that mean-square convergence also holds in the space C([0,T], C(B)) for every T > 0 if v_i ∈ C(B) for all i (corresponding to nonzero eigenvalues) and sup_{x∈B} ∑_{i=1}^∞ λ_i² v_i(x)² < ∞.
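To make the series representation (5) concrete, the following minimal sketch samples a truncated version of W_t on the one-dimensional domain B = [0, 2π], using the Neumann cosine basis and the eigenvalues (11) of Example 2.1 below; the truncation level n_modes, the step size, and the correlation length ξ are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def sample_q_wiener(T=1.0, n_steps=200, n_modes=50, xi=0.5, n_x=128, seed=0):
    """Sample one path of a truncated Q-Wiener process W_t = sum_i lambda_i beta_t^i v_i
    on B = [0, 2*pi] (d = 1), cf. Eq. (5) and Example 2.1."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 2 * np.pi, n_x)
    dt = T / n_steps
    i = np.arange(n_modes)
    # basis: v_0 = 1/sqrt(2*pi), v_i(x) = cos(i*x/2)/sqrt(pi) for i >= 1
    v = np.where(i[:, None] == 0, 1 / np.sqrt(2 * np.pi),
                 np.cos(i[:, None] * x[None, :] / 2) / np.sqrt(np.pi))
    lam = np.exp(-xi**2 * i**2 / (8 * np.pi))   # lambda_i = sqrt(lambda_i^2), Eq. (11)
    dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_modes))
    beta = np.cumsum(dbeta, axis=0)             # independent Brownian motions beta_t^i
    W = (beta * lam[None, :]) @ v               # W[k, :] = W_{k*dt}(x) on the grid
    return x, W
```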

The existence and uniqueness of mild solutions to (4) with trace-class noise for a given initial condition U_0 ∈ L²(B) is guaranteed under the Lipschitz condition on f, cf. [23], and we can write the solution in its mild form

U_t = e^{−αt} U_0 + ∫_0^t e^{−α(t−s)} KF(U_s) ds + ∫_0^t e^{−α(t−s)} dW_s.
(6)

The solution possesses a modification in C([0,T], L²(B)), and from now on we always identify the solution (6) with its continuous modification. It is worthwhile to note that for cylindrical Wiener processes—and thus in particular space-time white noise—there does not exist a solution to (4). This contrasts with other well-studied infinite-dimensional stochastic evolution equations, e.g., the stochastic heat equation. Due to the representation (6), a solution can only be as spatially regular as the stochastic convolution ∫_0^t e^{−α(t−s)} dW_s. In the present case, the semigroup generated by the linear operator is not smoothing, in contrast to, e.g., the semigroup generated by the Laplacian in the heat equation. Thus, the stochastic convolution is only as smooth as the noise, which for space-time white noise is not even a well-defined function. To be more specific, for cylindrical Wiener noise, the series representation of the stochastic convolution (cf. Eq. (8) below) does not converge in a suitable probabilistic sense.

We next aim to strengthen the spatial regularity of the solution (6), which will be required later on. According to [[23], Theorem 7.10], the solution (6) is a continuous process taking values in the Banach space C(B) if the initial condition satisfies U_0 ∈ C(B), the linear part in the drift of (4) generates a strongly continuous semigroup on C(B), the nonlinear term KF is globally Lipschitz continuous on C(B), and, finally, the stochastic convolution is a continuous process taking values in C(B). It is easily seen that the first three conditions are satisfied; sufficient conditions for the last property are given in the following lemma.

Lemma 2.1 Assume that the orthonormal basis functions v i are Lipschitz continuous with Lipschitz constants L i such that

sup_{x∈B} ∑_{i=1}^∞ λ_i² v_i(x)² < ∞,  sup_{x∈B} ∑_{i=1}^∞ λ_i² L_i^{2ρ} |v_i(x)|^{2(1−ρ)} < ∞
(7)

for some ρ ∈ (0,1). Then the process

O(x,t) := ∫_0^t e^{−α(t−s)} dW_s(x) = ∑_{i=1}^∞ λ_i ∫_0^t e^{−α(t−s)} dβ_s^i v_i(x)
(8)

possesses a modification with γ-Hölder continuous paths on R_+ × B for all γ ∈ (0, ρ/2).

Proof We prove the lemma applying the Kolmogorov–Centsov theorem (cf. [[23], Theorem 3.3 and Theorem 3.4]). Throughout the proof, C is some finite constant, which may change from line to line, but is independent of x, y ∈ B and t, s ≥ 0. We start by showing that the process O is Hölder continuous in the mean-square sense in each direction. As the v_i are assumed continuous, they are pointwise uniquely given, and each O(x,t) is, for fixed x ∈ B and t ≥ 0, a Gaussian random variable due to ∑_{i=1}^∞ λ_i² v_i(x)² < ∞. Hence, for all 0 ≤ s ≤ t and all x, y ∈ B, we obtain

E|O(x,t) − O(y,t)|² = ∑_{i=1}^∞ λ_i² ∫_0^t e^{−2α(t−s)} ds |v_i(x) − v_i(y)|² ≤ C sup_{z∈B} [ ∑_{i=1}^∞ λ_i² L_i^{2ρ} |v_i(z)|^{2(1−ρ)} ] |x−y|^ρ

using

|v_i(x) − v_i(y)|² ≤ L_i^{2ρ} |x−y|^ρ |x−y|^ρ ( |v_i(x)| + |v_i(y)| )^{2(1−ρ)}

for every ρ ∈ [0,1] and |x−y|^ρ ≤ diam(B)^ρ. Next, for the temporal regularity, we obtain

E|O(x,t) − O(x,s)|² = ∑_{i=1}^∞ λ_i² v_i(x)² ∫_s^t e^{−2α(t−r)} dr + ∑_{i=1}^∞ λ_i² v_i(x)² ∫_0^s |e^{−α(t−r)} − e^{−α(s−r)}|² dr = ∑_{i=1}^∞ λ_i² v_i(x)² (1 − e^{−2α(t−s)})/(2α) + ∑_{i=1}^∞ λ_i² v_i(x)² ( (1 − e^{−α(t−s)})² − (e^{−αt} − e^{−αs})² )/(2α).

As the exponential function on the negative half-axis is Hölder continuous for every ρ ∈ [0,1], it holds that

E|O(x,t) − O(x,s)|² ≤ C_ρ ∑_{i=1}^∞ λ_i² v_i(x)² |t−s|^ρ.

Thus, overall, Jensen’s inequality yields E|O(x,t) − O(y,s)|² ≤ C_ρ (|x−y|² + |t−s|²)^{ρ/2}. Since the difference O(x,t) − O(y,s) is centered Gaussian, it further holds that

E|O(x,t) − O(y,s)|^{2m} ≤ C_{ρ,m} ( |t−s|² + |x−y|² )^{mρ/2}  ∀ m ∈ N.

Now, the Kolmogorov–Centsov theorem implies the statement of the lemma. □
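Lemma 2.1 concerns the stochastic convolution O, whose mode coefficients are scalar Ornstein–Uhlenbeck processes; this makes O easy to sample exactly in distribution on a time grid. A minimal sketch, assuming a user-supplied eigenvalue sequence λ_i (the default decay below is purely illustrative):

```python
import numpy as np

def sample_stochastic_convolution(alpha=1.0, T=1.0, n_steps=200, lam=None, seed=1):
    """Sample the mode coefficients of O_t = int_0^t e^{-alpha(t-s)} dW_s, Eq. (8).
    Each coefficient o_t^i = lambda_i int_0^t e^{-alpha(t-s)} dbeta_s^i is a scalar
    OU process, so the exact Gaussian transition can be used step by step."""
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam) if lam is not None else np.exp(-0.1 * np.arange(30))
    dt = T / n_steps
    decay = np.exp(-alpha * dt)
    std = lam * np.sqrt((1.0 - decay**2) / (2.0 * alpha))  # exact one-step stddev
    o = np.zeros((n_steps + 1, lam.size))
    for k in range(n_steps):
        o[k + 1] = decay * o[k] + std * rng.normal(size=lam.size)
    return o   # o[k, i]: coefficient of v_i in O at time k*dt
```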

We present an example to illustrate the type of noise we are generally interested in. Further motivation is provided in Sect. 3.

Example 2.1 Consider the neural field equation on a d-dimensional cube B = [0, 2π]^d with noise based on trigonometric basis functions of L²([0, 2π]^d). This type of noise is almost ubiquitous in applications since, as for stochastic heat equations, the basis functions can be chosen such that the usual (Dirichlet, Neumann, or periodic) boundary conditions are preserved. For the example of noise preserving homogeneous Neumann boundary conditions, the basis functions are

v_i(x) = ∏_{k=1}^d e_{i_k}(x_k),
(9)

where x = (x_1, …, x_d), i = (i_1, …, i_d) is a multi-index in N^d, and the functions e_{i_k} are given by

e_{i_k}(x_k) = 1/√(2π) if i_k = 0, and e_{i_k}(x_k) = (1/√π) cos(i_k x_k / 2) if i_k ≥ 1.

The functions v_i are, for all i ∈ N^d, pointwise bounded by π^{−d/2} and Lipschitz continuous with Lipschitz constants L_i = π^{−d/2} |i| (cf. [[8], Lemma 5.3]). Next, we construct a trace-class Wiener process from these basis functions. A particularly important example of spatiotemporal noise is smooth noise with exponentially decaying spatial correlation length [15, 36, 43], i.e.,

E[ W_t(x) W_s(y) ] = min{t,s} (2ξ)^{−d} exp( −(π/4) |x−y|²/ξ² ) + correction on the boundary
(10)

for a parameter ξ > 0 modeling the spatial correlation length. Note that for ξ → 0 this noise process approximates space-time white noise. Following [60], we can calculate, under the assumption that ξ ≪ 2π, the coefficients λ_i² such that the Q-Wiener process (5) possesses the correlation function (10), and we obtain

λ_i² = exp( −ξ² |i|² / (4π) ).
(11)

Now it is easy to see that for this choice of eigenvalues the noise is of trace class, and moreover the additional conditions of Lemma 2.1 are satisfied: As the functions v_i are bounded, we obtain

sup_{x∈B} ∑_{i∈N^d} λ_i² v_i(x)² ≤ π^{−d} + π^{−d} ∑_{N=1}^∞ ∑_{i ∈ {0,…,N}^d ∖ {0,…,N−1}^d} exp( −ξ² |i|² / (4π) ) ≤ π^{−d} + π^{−d} d ∑_{N=1}^∞ exp( −ξ² N² / (4π) ) (2N)^{d−1} < ∞

and the second condition of (7) is satisfied as

sup_{x∈B} ∑_{i∈N^d} λ_i² L_i^{2ρ} |v_i(x)|^{2(1−ρ)} ≤ π^{−d} ∑_{N=1}^∞ ∑_{i ∈ {0,…,N}^d ∖ {0,…,N−1}^d} exp( −ξ² |i|² / (4π) ) |i|^{2ρ} ≤ π^{−d} d ∑_{N=1}^∞ exp( −ξ² N² / (4π) ) N^{2ρ} (2N)^{d−1} < ∞.
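Both series can also be evaluated numerically. The following sketch sums the trace and the two conditions (7) for the eigenvalues (11), using the bounds |v_i| ≤ π^{−d/2} and L_i = π^{−d/2}|i| quoted above; the values of ξ, d, ρ, and the index cutoff max_index are illustrative assumptions.

```python
import numpy as np
from itertools import product

def check_trace_class(xi=0.5, d=2, rho=0.5, max_index=40):
    """Partial sums of sum_i lambda_i^2 and of the two conditions (7) for
    lambda_i^2 = exp(-xi^2 |i|^2 / (4*pi)), Eq. (11)."""
    trace = cond1 = cond2 = 0.0
    for i in product(range(max_index), repeat=d):
        mod2 = sum(k * k for k in i)                     # |i|^2
        lam2 = np.exp(-xi**2 * mod2 / (4 * np.pi))
        trace += lam2
        cond1 += lam2 * np.pi**(-d)                      # lambda_i^2 v_i(x)^2
        # lambda_i^2 L_i^{2 rho} |v_i(x)|^{2(1-rho)} with L_i = pi^{-d/2}|i|
        cond2 += lam2 * (np.pi**(-d / 2) * np.sqrt(mod2))**(2 * rho) \
                      * np.pi**(-d * (1 - rho))
    return trace, cond1, cond2

print(check_trace_class())
```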

3 Gain Function Perturbation

Another motivation for the considered additive-noise neural field equations stems from a (formal) perturbation of the gain function f with space-time white noise. Let Ẇ denote space-time white noise and consider the randomly perturbed Amari equation

∂_t U(x,t) = −α U(x,t) + ∫_B w(x,y) ( f(U(y,t)) + ϵ Ẇ(y,t) ) dy.
(12)

Recall that, by assumption (H2), the integral operator K defined by the kernel w is a self-adjoint compact operator. Thus, the spectral theorem implies that K possesses only real eigenvalues λ_i, i ∈ N, and the corresponding eigenfunctions v_i form an orthonormal basis of L²(B). If, additionally, we assume that

(H4) K is a Hilbert–Schmidt operator on L²(B), that is, ‖w‖_{L²(B×B)} < ∞,

then the eigenvalues satisfy ∑_{i=1}^∞ λ_i² < ∞. Hence, K possesses the series representation

Kg = ∑_{i=1}^∞ λ_i ⟨g, v_i⟩ v_i  ∀ g ∈ L²(B),

which yields for the perturbed equation (12) the representation

∂_t U(x,t) = −α U(x,t) + ∑_{i=1}^∞ λ_i ( ∫_B f(U(y,t)) v_i(y) dy + ϵ ⟨Ẇ(·,t), v_i⟩ ) v_i(x).

Next, note that the random variables β̇_t^i = ⟨Ẇ(·,t), v_i⟩ form a sequence of independent scalar white noise processes in time. Therefore, the perturbed equation becomes

∂_t U(x,t) = −α U(x,t) + ∫_B w(x,y) f(U(y,t)) dy + ϵ ∑_{i=1}^∞ λ_i β̇_t^i v_i(x).

Rewriting this equation in the usual notation of stochastic differential equations we obtain

dU_t(x) = [ −α U_t(x) + ∫_B w(x,y) f(U_t(y)) dy ] dt + ϵ dW_t(x),
(13)

where

W_t(x) = ∑_{i=1}^∞ λ_i β_t^i v_i(x)

is a trace-class Wiener process on the Hilbert space L²(B). Note that, in contrast to (5), the coefficients λ_i here may be negative; however, as −β^i is also a Wiener process, this slight inconsistency can be neglected.

We next want to discuss the spatial continuity of the solution to this equation with its particular noise structure. It is clear that this should translate into smoothness conditions on the kernel w. Due to Lemma 2.1, it is sufficient to establish the conditions (7): First, it holds that

∑_{i=1}^∞ λ_i² v_i(x)² = ∑_{i=1}^∞ ( ∫_B w(x,y) v_i(y) dy )² = ∑_{i=1}^∞ ⟨w(x,·), v_i⟩² = ‖w(x,·)‖²_{L²(B)}

due to Parseval’s identity. Hence, the first condition of (7) becomes

sup_{x∈B} ‖w(x,·)‖_{L²(B)} < ∞.
(14)

Next, the basis functions are continuous if the kernel w(x,y) is continuous in x, and as the minimal Lipschitz constant is given by the supremum of the derivative, we obtain

L_i = sup_{x∈B} | (1/λ_i) ∇_x ∫_B w(x,y) v_i(y) dy | ≤ (1/|λ_i|) sup_{x∈B} ‖∇_x w(x,·)‖_{L²(B)}

due to the Cauchy–Schwarz inequality. Therefore, the second condition in (7) is satisfied if

sup_{x∈B} ‖∇_x w(x,·)‖_{L²(B)} < ∞  and  ∑_{i=1}^∞ |λ_i|^{2(1−ρ)} |v_i(x)|^{2(1−ρ)} ≤ M  ∀ x ∈ B
(15)

for some ρ ∈ (0,1) and some M < ∞. The condition (14) and the first part of (15) are easily checked, but for the second part of (15) one usually has to obtain theoretical results on the speed of decay of the eigenvalues. We note that (15) is certainly satisfied with ρ = 1/2 if K is a trace-class operator and the eigenfunctions are pointwise bounded independently of i; see, e.g., Example 2.1.
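In practice, the eigenpairs (λ_i, v_i) of K — and with them the conditions (14)–(15) — can be approximated numerically. A sketch using a Nyström-type discretization on a uniform grid; the domain, grid size, and the “Mexican hat”-type kernel in the usage example are illustrative assumptions.

```python
import numpy as np

def kernel_eigenpairs(w, a=0.0, b=2 * np.pi, n=256):
    """Nystrom approximation of eigenvalues lambda_i and eigenfunctions v_i of
    (Kg)(x) = int_B w(x,y) g(y) dy on B = [a, b] for a symmetric kernel w."""
    x, h = np.linspace(a, b, n, retstep=True)
    W = w(x[:, None], x[None, :])              # kernel matrix w(x_j, x_k)
    lam, V = np.linalg.eigh(W * h)             # symmetric eigenproblem with weight h
    idx = np.argsort(-np.abs(lam))             # sort by decreasing |lambda_i|
    lam, V = lam[idx], V[:, idx]
    v = V / np.sqrt(h)                         # normalize so that int_B v_i^2 dx = 1
    return x, lam, v

# usage with an illustrative symmetric kernel
x, lam, v = kernel_eigenpairs(lambda x, y: np.exp(-(x - y)**2)
                                           - 0.5 * np.exp(-(x - y)**2 / 4))
print(lam[:5])   # the decay of |lambda_i| controls condition (15)
```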

4 Deterministic Dynamics

The classical deterministic Amari model, obtained for ϵ=0 in (2), is

∂_t U(x,t) = −α U(x,t) + ∫_B w(x,y) f(U(y,t)) dy,
(16)

where B ⊆ R^d. Note that we may allow B to be unbounded in the deterministic case as solutions of (16) do exist in this case [55]. Suppose there exists a stationary solution U_* = U_*(x) of (16). To determine the stability of U_*, consider U(x,t) = U_*(x) + ψ(x,t). Substituting into (16) and Taylor-expanding around U_* yields the linearized problem

∂_t ψ(x,t) = −α ψ(x,t) + ∫_B w(x,y) (Df)(U_*(y)) ψ(y,t) dy.
(17)

Hence, the standard ansatz ψ(x,t) = ψ_0(x) e^{μt} leads to the eigenvalue problem

(μ + α) ψ_0(x) =: η ψ_0(x) = ∫_B w(x,y) (Df)(U_*(y)) ψ_0(y) dy =: (Lψ_0)(x),  or  Lψ_0 = η ψ_0.
(18)

The linear stability condition μ < 0 is equivalent to η < α for η ∈ spec(L). The stability analysis can thus be reduced to understanding the operator L. However, this is a highly nontrivial problem as the behavior depends upon B, U_*(x), w(x,y), and f(u).

An LDP and Kramers’ law are of particular interest in the case of bistability. Therefore, we point out that there are many situations where (16) has three stationary solutions: U_±(x), which are stable, and U_0(x), which is unstable. The following three examples make this claim more precise.

Example 4.1 The first example is presented by Ermentrout and McLeod [29]. Let B = R, w(x,y) = w(|x−y|), α = 1, and assume that 0 ≤ U(x,t) ≤ 1. Furthermore, suppose that f ∈ C¹([0,1], R) with f′ > 0 and

f̃(U) := −U + f(U)
(19)

has precisely three zeros U = 0, a, 1 with 0 < a < 1. The additional conditions f′(0) < 1 and f′(1) < 1 guarantee stability of the stationary solutions U = 0 and U = 1. As an even more explicit assumption [[29], p. 463], one may consider a Dirac δ-distribution for w in (16), which yields

∂_t U(x,t) = −U(x,t) + F(U(x,t)).
(20)

Suppose there are precisely three solutions of U = F(U) given by U = 0, a, 1 with 0 < a < 1. If F′(0) < 1, F′(1) < 1, and F′(a) > 1, then (20) has an unstable stationary solution between the two stable stationary solutions.
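This bistable configuration is easy to verify numerically for a concrete choice of F. The sketch below locates the three solutions of U = F(U) and classifies them via the sign of F′(U_*) − 1; the sigmoidal F and its parameters β, θ are illustrative assumptions, not taken from [29].

```python
import numpy as np
from scipy.optimize import brentq

beta, theta = 8.0, 0.5                                  # assumed parameters
F = lambda u: 1.0 / (1.0 + np.exp(-beta * (u - theta)))
dF = lambda u: beta * F(u) * (1.0 - F(u))

# locate roots of -U + F(U) = 0 by sign changes on a grid
grid = np.linspace(-0.5, 1.5, 2000)
g = -grid + F(grid)
roots = [brentq(lambda u: -u + F(u), grid[k], grid[k + 1])
         for k in range(len(grid) - 1) if g[k] * g[k + 1] < 0]
for r in roots:
    # stable iff F'(U_*) < 1, cf. the conditions F'(0) < 1, F'(1) < 1, F'(a) > 1
    print(f"U* = {r:.4f}, F'(U*) = {dF(r):.3f}, "
          f"{'stable' if dF(r) < 1 else 'unstable'}")
```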

Example 4.2 An even more concrete example is given by Guo and Chow [39, 40]. They assume B = R, w(x,y) = w(x−y), α = 1, and fix two functions

f(u) = [ b(u − u_b) + 1 ] H(u − u_b),  w(x) = A e^{−a|x|} − e^{−|x|},

where H(·) is the Heaviside function and b, a, A, and u_b are parameters. Depending on the parameter values, one may obtain three constant stationary solutions exhibiting bistability, as expected from Example 4.1. However, there are also parameter values for which three stationary pulses exhibiting bistability exist.

Note that the choice B = R is not essential to obtain two deterministically stable stationary states U_±(x) and one deterministically unstable stationary state U_0(x). The important aspect is that certain algebraic equations, such as U = f(U) and U = F(U) in Example 4.1, have the correct number of solutions. Furthermore, one has to make sure that the sign of the nonlinearity f is chosen correctly to obtain the desired deterministic stability results for the stationary solutions. Hence, we expect that a similar situation also holds for bounded domains; see also [63].

Examples 4.1–4.2 are typical for many similar cases with x ∈ R or x ∈ R². Many results on the existence and stability of stationary solutions are available; see, e.g., [1, 46, 51, 52], and references therein.

Example 4.3 As a higher-dimensional example, one may consider the work by Jin, Liang, and Peng [42], who assume that w(x,y) = w(x−y), α = 1, B = R^d, and

Z := ∫_{R^d} w(x) dx < ∞,  κZ > 1,

where κ is the Lipschitz constant of f ∈ C¹(R, R). Furthermore, suppose f is uniformly continuous and

f′(U) Z < 1 for U ∈ (−∞, U_1) ∪ (U_2, ∞),  f′(U) Z = 1 for U ∈ {U_1, U_2},  f′(U) Z > 1 for U ∈ (U_1, U_2),

for U_1 < 0 < U_2. Then [[42], Proposition 11] the conditions

−U_1 + f(U_1) Z < 0  and  −U_2 + f(U_2) Z > 0

yield three stationary solutions U_+, U_−, and U_0. The solutions U_± are stable and satisfy U_− < 0 and U_+ > 0. The solution U_0 is unstable.

Although we only focus on stationary solutions, it is important to remark that the techniques developed here could—in principle—also be applied to traveling waves U(x,t) = U(x − st) for s > 0. The existence and stability of traveling waves for (16) have been investigated in many different situations; see, e.g., [12, 14, 21, 29], and references therein. However, it seems reasonable to restrict ourselves here to the stationary case as even for this simpler case an LDP and Kramers’ law are not yet well understood.

5 Large Deviations and Kramers’ Law

Here, we briefly introduce the background and notation for LDPs and Kramers’ law needed throughout the remaining part of the paper; see [26, 34] for more details. Consider a topological space X with Borel σ-algebra B_X. A mapping I : X → [0, ∞] is called a good rate function if it is lower semicontinuous and the level set {h : I(h) ≤ α} is compact for each α ∈ [0, ∞). Sometimes the term action functional is used instead of rate function. Consider a family {μ_ϵ} of probability measures on (X, B_X). The measures {μ_ϵ} satisfy an LDP with good rate function I if

−inf_{Γ°} I ≤ liminf_{ϵ→0} ϵ² ln μ_ϵ(Γ) ≤ limsup_{ϵ→0} ϵ² ln μ_ϵ(Γ) ≤ −inf_{Γ̄} I
(21)

holds for any measurable set Γ ⊆ X; often the infima over the interior Γ° and the closure Γ̄ coincide, so that the liminf and limsup agree at a common limit. One of the most classical cases is the application of (21) to finite-dimensional SDEs

du_t = g(u_t) dt + ϵ G(u_t) dβ_t,
(22)

where u_t ∈ R^N, g : R^N → R^N, G : R^N → R^{N×k}, and β_t = (β_t^1, …, β_t^k)^T is a vector of k independent Brownian motions; we shall assume that the initial condition u_0 ∈ R^N is deterministic. If we want to emphasize that u_t depends on ϵ, we shall also use the notation u_t^ϵ. The topological space X is chosen as a path space

X := C_0([0,T], R^N) = { ϕ ∈ C([0,T], R^N) : ϕ(0) = u_0 }.

To state the next result, we also need the Sobolev space

H_1^N := { ϕ : [0,T] → R^N : ϕ absolutely continuous, ϕ̇ ∈ L², ϕ(0) = 0 }.
(23)

Furthermore, we are going to assume that the diffusion matrix D(u) := G(u) G(u)^T ∈ R^{N×N} is positive definite.

Theorem 5.1 ([26, 34])

The SDE (22) satisfies the LDP (21) given by

−inf_{Γ°} I ≤ liminf_{ϵ→0} ϵ² ln P( (u_t^ϵ)_{t∈[0,T]} ∈ Γ ) ≤ limsup_{ϵ→0} ϵ² ln P( (u_t^ϵ)_{t∈[0,T]} ∈ Γ ) ≤ −inf_{Γ̄} I
(24)

for any measurable set of paths ΓX with good rate function

I(ϕ) = I_{[0,T]}(ϕ) = ½ ∫_0^T ( ϕ̇_t − g(ϕ_t) )^T D(ϕ_t)^{−1} ( ϕ̇_t − g(ϕ_t) ) dt  if ϕ ∈ u_0 + H_1^N,  and I(ϕ) = +∞ otherwise.
(25)
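For a path sampled on a uniform time grid, the rate function (25) can be approximated by simple quadrature. A minimal sketch, assuming a constant diffusion matrix D and forward differences for the time derivative; the toy drift in the usage example is an illustrative assumption.

```python
import numpy as np

def fw_action(phi, dt, g, D_inv):
    """Discretized Freidlin-Wentzell rate function (25):
    I(phi) = 1/2 int_0^T (phi' - g(phi))^T D^{-1} (phi' - g(phi)) dt,
    for phi of shape (n_steps + 1, N) on a uniform grid with spacing dt."""
    dphi = np.diff(phi, axis=0) / dt                 # forward difference for phi'
    mid = 0.5 * (phi[:-1] + phi[1:])                 # midpoint values of the path
    r = dphi - np.apply_along_axis(g, 1, mid)        # residual phi' - g(phi)
    return 0.5 * dt * np.einsum('ti,ij,tj->', r, D_inv, r)

# usage: straight-line path between two stable states of a toy gradient system
g = lambda u: -u * (u @ u - 1.0)                     # illustrative drift
phi = np.linspace([-1.0, 0.0], [1.0, 0.0], 101)     # path from u_- to u_+
print(fw_action(phi, dt=0.05, g=g, D_inv=np.eye(2)))
```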

An important application of the LDP (24) is the so-called first-exit problem. Suppose that u_t starts near a stable equilibrium u_* ∈ D ⊂ R^N of the deterministic system given by setting ϵ = 0 in (22), where D is a bounded domain with smooth boundary. Define the first-exit time

τ_D^ϵ := inf { t > 0 : u_t^ϵ ∉ D }.
(26)

To formalize the application of the LDP, define the mapping

Z(u, v; s) := inf { I(ϕ) : ϕ ∈ C([0,s], R^N), ϕ_0 = u, ϕ_s = v },
(27)

which is the cost for a path starting at u to reach v in time s. Next, assume that D̄ is properly contained inside the (deterministic) basin of attraction of u_*. Then one can show [[34], Theorem 4.1, p. 124] that

lim_{ϵ→0} ϵ² ln P( τ_D^ϵ ≤ t | u_0 = u ) = −inf { Z(u, v; s) : s ∈ [0,t], v ∈ ∂D }.
(28)

To get more precise information on the exit distribution, one defines the function

Z(u_*, v) = inf_{t>0} Z(u_*, v; t),

which is called the quasipotential for u_*. It is natural to minimize the quasipotential over ∂D and define

Z̄ := inf_{v∈∂D} Z(u_*, v).

Theorem 5.2 ([[34], Theorem 4.2, p. 127], [[26], Theorem 5.7.11])

For all initial conditions u ∈ D and all δ > 0, the following two limits hold:

lim_{ϵ→0} P( e^{(Z̄−δ)/ϵ²} < τ_D^ϵ < e^{(Z̄+δ)/ϵ²} | u_0 = u ) = 1,
(29)
lim_{ϵ→0} ϵ² ln E[ τ_D^ϵ | u_0 = u ] = Z̄.
(30)

If the SDE (22) has a gradient structure with identity diffusion matrix, i.e.,

g(u) = −∇V(u) for V : R^N → R,  and  G(u) = Id ∈ R^{N×N},
(31)

then one can show [[34], Sect. 4.3] that the quasipotential is given by Z(u_*, v) = 2(V(v) − V(u_*)). If the potential has precisely two local minima u_± and a saddle point u_s with N−1 stable directions, so that the Hessian ∇²V(u_s) has eigenvalues

ρ_1(u_s) < 0 < ρ_2(u_s) ≤ ⋯ ≤ ρ_N(u_s),

then one can even refine Theorem 5.2. Suppose u_0 = u_−; then the mean first-passage time to u_+ satisfies

E[ inf { t > 0 : ‖u_t − u_+‖_2 ≤ δ } ] ≃ (2π / |ρ_1(u_s)|) √( |det(∇²V(u_s))| / det(∇²V(u_−)) ) e^{2(V(u_s) − V(u_−))/ϵ²},
(32)

where ‖·‖_2 denotes the usual Euclidean norm in R^N. The formula (32) is also known as Kramers’ law [5] or the Arrhenius–Eyring–Kramers law [2, 31, 45]. Note that the key differences with the general LDP (29) for the first-exit problem are that (32) yields a precise prefactor for the exponential transition time and uses the explicit form of the good rate function for gradient systems. It is interesting to note that a rigorous proof of (32) has only been obtained quite recently [9, 10].
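For a gradient system, the right-hand side of (32) is directly computable from V and its Hessian. A sketch for an illustrative double-well potential in R²; the potential, the states u_− and u_s, and the noise level ϵ are assumptions made for the example only.

```python
import numpy as np

def kramers_time(V, hess, u_minus, u_saddle, eps):
    """Evaluate the right-hand side of Kramers' law (32) for
    du = -grad V(u) dt + eps dbeta, assuming u_minus is a local minimum and
    u_saddle a saddle with exactly one unstable direction."""
    H_s, H_m = hess(u_saddle), hess(u_minus)
    rho = np.linalg.eigvalsh(H_s)          # ascending eigenvalues; rho[0] < 0 at the saddle
    prefactor = (2 * np.pi / abs(rho[0])) * np.sqrt(abs(np.linalg.det(H_s))
                                                    / np.linalg.det(H_m))
    return prefactor * np.exp(2 * (V(u_saddle) - V(u_minus)) / eps**2)

# illustrative double-well: V(u) = (u1^2 - 1)^2/4 + u2^2/2
V = lambda u: 0.25 * (u[0]**2 - 1)**2 + 0.5 * u[1]**2
hess = lambda u: np.array([[3 * u[0]**2 - 1, 0.0], [0.0, 1.0]])
print(kramers_time(V, hess, np.array([-1.0, 0.0]), np.array([0.0, 0.0]), eps=0.3))
```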

6 Gradient Structures in Infinite Dimensions

The finite-dimensional Kramers’ formula (32) applies to SDEs (22) with a gradient structure g(u) = −∇V(u), where V : R^N → R is the potential. A generalization of Kramers’ law has been carried over to the infinite-dimensional case of SPDEs given by

dU = [ ΔU − h′(U) ] dt + ϵ dW(x,t)
(33)

for U = U(x,t), x ∈ B̃ ⊂ R, B̃ a bounded interval, h ∈ C^k(R, R) for suitably large k ∈ N, where W(x,t) denotes space-time white noise and either Dirichlet or Neumann boundary conditions are used [4, 6, 7]. A crucial reason why this generalization works is that the SPDE (33) has a gradient-type structure [32] given by the energy functional

V[U] := ∫_{B̃} [ ½ U′(x)² + h(U(x)) ] dx.
(34)

More precisely, when ϵ=0 one obtains from (33) a PDE, say with Dirichlet boundary conditions,

dU = [ ΔU − h′(U) ] dt,  U(x) = 0 on ∂B̃,
(35)

for a given sufficiently smooth initial condition U(x,0) = U_0(x) ∈ C^k(R, R). Standard parabolic regularity [[30], Sect. 7.1] implies that solutions U of (35) lie in the Sobolev spaces H_0^k(B̃). Computing the Gâteaux derivative in this space yields

∇_z V[U] = ∫_{B̃} [ −U″(x) + h′(U(x)) ] z(x) dx.
(36)

The Gâteaux derivative is equal to the Fréchet derivative ∇V = DV by a standard continuity result [[25], p. 47]. Hence, (36) shows that the stationary solutions of (35) are critical points of the functional V. Since the gradient structure of the deterministic PDE (35) is a key ingredient for obtaining a Kramers’-type estimate for the SPDE (33), we would like to check whether there is an analogue available for the deterministic Amari model (16).

We shall assume for simplicity that f ∈ BC¹(R) for the calculations in this section. Although this is a slightly stronger assumption than (H1), we shall see below that even with this assumption we are not able to obtain an immediate generalization of (36). Using a direct modification of the results in [55], it follows that the deterministic Amari model (16) has solutions U(x,t) in the Hölder space BC^α(B) × BC^α([0,T]) for α ∈ (0,1], where B ⊂ R^d is the usual domain we use for the Amari model. Now consider the naive guess analogous to (36) given by

V[U] := ∫_B [ (α/2) U(x)² − ∫_B ∫_0^{U(y)} f(r) w(x,y) dr dy ] dx.
(37)

Computing the derivative in BC^α(B) yields

∇_z V[U] = lim_{δ→0} (1/δ) ( V[U + δz] − V[U] ) = ∫_B [ α U(x) z(x) − ∫_B f(U(y)) w(x,y) z(y) dy ] dx.
(38)

Therefore, setting ∇_z V[U] = 0 is not equivalent to solving the stationary problem

−α U(x) + ∫_B w(x,y) f(U(y)) dy = 0.

Due to the presence of the different terms z(x) and z(y) in (38), one may guess that the modified functional

V[U] := ∫_B [ (α/2) U(x)² − ½ ∫_B f(U(y)) f(U(x)) w(x,y) dy ] dx
(39)

could work. However, another direct computation shows that

∇_z V[U] = ∫_B α U(x) z(x) dx − ½ ∫_B ∫_B f(U(x)) Df(U(y)) w(x,y) z(y) dy dx − ½ ∫_B ∫_B f(U(y)) Df(U(x)) w(x,y) z(x) dy dx = α ⟨U, z⟩ − ⟨KF(U), Df(U) z⟩.

Hence, f and its derivative Df both appear, instead of the desired formulation; by a similar computation, one can show that replacing f(u(·)) in (39) by ∫_0^u f(r) dr fails as well. Hence, there does not seem to be a natural generalization of the guess for the gradient functional (34). However, one has to consider possible coordinate changes. The idea to apply a preliminary transformation has been discussed, e.g., in [[28], p. 2] and [[51], p. 488]. Assume that

f^{−1} =: g exists  and  g′ > 0.
(40)

Define P(x,t):=f(U(x,t)) as the mean action-potential generating rate so that U=g(P). Observe that

∂_t P(x,t) = (1 / g′(P(x,t))) [ −α g(P(x,t)) + ∫_B w(x,y) P(y,t) dy ].
(41)

For this equation, the problem observed in (39) should disappear as the integral only contains linear terms. One may define an energy-type functional

E[P] := ∫_B [ ∫_0^{P(x)} α g(r) dr − ½ ∫_B w(x,y) P(y) P(x) dy ] dx.

Calculating the derivative yields

∇_Q E[P] = lim_{δ→0} (1/δ) ( E[P + δQ] − E[P] ) = ∫_B α g(P(x)) Q(x) dx − ½ ∫_B ∫_B w(x,y) { P(y) Q(x) + P(x) Q(y) } dy dx = α ⟨g(P), Q⟩ − ⟨ ∫_B w(·,y) P(y) dy, Q ⟩.

This shows that there is a hidden energy-type flow structure in the Amari model under the assumptions (40), so that

∂_t P(x,t) = −(1 / g′(P(x,t))) ∇E[P(x,t)].
(42)

However, even with this variable transformation, there seems to be little hope of deriving a precise Kramers’ law for the stochastic Amari model (2) by generalizing the approach for SPDE systems [4, 6, 7]. The problems are as follows:

  • There is still a space-time dependent nonlinear prefactor 1/g′(P(x,t)) in (42) for the deterministic system, so the system is not an exact gradient flow for a potential.

  • Applying the change of variable P_t(x) := f(U_t(x)) to the stochastic Amari model (2) requires an Itô-type formula, so that

    dP_t(x) = (1 / g′(P_t(x))) [ −α g(P_t(x)) + ∫_B w(x,y) P_t(y) dy + O(ϵ²) ] dt + ϵ M(P_t(x)) dW_t(x),
    (43)

where M(P_t(x)) is now a multiplicative noise term; see [24] and references therein for more details on infinite-dimensional Itô-type formulas. The higher-order term O(ϵ²) in the drift part of (43) is not expected to cause difficulties, but a multiplicative noise structure definitely excludes the direct application of Kramers’ law.

  • Even if we would just assume—without any immediate physical motivation—that the noise term in (43) is purely additive ϵ dW_t(x), there is a problem in applying Kramers’ law since we do not have a structure as in (22) with G(·) = Id: W_t(x) is a Q-Wiener process defined in (5), and driving (4) by space-time white noise is excluded due to the nonexistence of a solution.

Based on these observations, an immediate generalization of a sharp Kramers’ formula to neural fields seems unlikely. Hence, we try to understand an LDP for the stochastic Amari-type model (2).

7 Direct Approach to an LDP

A general direct approach to the derivation of an LDP for infinite-dimensional stochastic evolution equations is presented in [23], and further results have been obtained for certain additional classes of SPDEs [17–19, 57]. The results in [23] are valid for semilinear equations with suitable Lipschitz assumptions on the nonlinearity and with solutions taking values in spaces of continuous functions. We state the available results applied to continuous solutions of the Amari equation (4), assuming that the conditions of Lemma 2.1 are satisfied.

For the following, we assume that there exists an open neighborhood D ⊂ C(B) containing a stable equilibrium state u_* of the deterministic Amari equation (16) such that D̄ is contained in the basin of attraction of u_*. We are interested in the rate function and the first-exit time of the process from D given by

τ_D^ϵ = inf { t ≥ 0 : U_t ∉ D }

if U starts in the deterministic equilibrium state u_*. In order to state the quasipotential for u_*, we consider the control system

ẏ = −α y + KF(y) + Q^{1/2} v,  y_0 = x ∈ C(B)
(44)

for controls v ∈ L²((0,T), L²(B)) for all T > 0, and denote by y^{x,v} its unique mild solution taking values in C([0,T], C(B)) for all T > 0. Then we define

I(u_*, z) = inf { ½ ∫_0^T ‖v(s)‖²_{L²} ds : y^{u_*,v}(T) = z, T > 0 },
(45)

where this quasipotential gives the minimal energy necessary to move the control system (44), started at the equilibrium state u_*, to z.

Theorem 7.1 ([[23], Theorem 12.18])

It holds that

lim_{ϵ→0} ϵ² ln E[ τ_D^ϵ | U_0 = u_* ] = inf_{z∈∂D} I(u_*, z).

Following further the exposition in [[23], Sect. 12], explicit formulae for the rate function I are only available in the special case of the drift possessing a gradient structure and of space-time white noise. As we have argued above, this structure is not satisfied for neural field equations. Hence, the same observations as presented at the end of the last section prevent a further direct analytic approach to the LDP. Therefore, we try to understand the LDP problem for a discretized, approximate finite-dimensional version of the neural field equation.

8 Galerkin Approximation

Throughout this section, we assume that the assumptions (H1)–(H3) are satisfied. As a discretized version of the neural field equation (2), we consider its spectral Galerkin approximations; recall that the solution U_t of (2) lies in C([0,T], L²(B)) as discussed in Sect. 2. In order to decouple the noise, we define the spectral representation of the solution

U_t(x) = ∑_{i=1}^∞ u_t^i v_i(x).
(46)

Here, the orthonormal basis functions v_i are given by the eigenfunctions of the covariance operator of the noise with corresponding eigenvalues λ_i²; see Eq. (5). To obtain an equation for the coefficients u_t^i, we take the inner product of Eq. (4) with the basis functions v_i, which yields

d⟨U_t, v_i⟩ = [ −α ⟨U_t, v_i⟩ + ⟨KF(U_t), v_i⟩ ] dt + ϵ d⟨W_t, v_i⟩  for i ∈ N.

After plugging in (46), we obtain for the coefficients u^i the countable Galerkin system

du_t^i = [ −α u_t^i + (KF)^i(u_t^1, u_t^2, …) ] dt + ϵ λ_i dβ_t^i  for i ∈ N.
(47)

Here, the nonlinearities coupling all the equations are given by

(KF)^i(u_t^1, u_t^2, …) := ∫_B v_i(x) ( ∫_B w(x,y) f( ∑_{j=1}^∞ u_t^j v_j(y) ) dy ) dx = ∫_B f( ∑_{j=1}^∞ u_t^j v_j(x) ) ( ∫_B w(x,y) v_i(y) dy ) dx

due to the symmetry of the kernel w. If, in addition, we assume that (H4) holds and K and Q possess the same eigenfunctions and the eigenvalues are related as discussed in Sect. 3 the nonlinearities become

(KF)^i(u_t^1, u_t^2, …) = λ_i ∫_B f( ∑_{j=1}^∞ u_t^j v_j(x) ) v_i(x) dx.
(48)

The Nth Galerkin approximation U^N of U is obtained by truncating the spectral representation (46), and is thus given by

U_t^N = ∑_{i=1}^N u_t^{i,N} v_i,
(49)

where the u_t^{i,N} are the solutions to the N-dimensional Galerkin SDE system

du_t^{i,N} = [ −α u_t^{i,N} + (KF)^{i,N}(u_t^{1,N}, …, u_t^{N,N}) ] dt + ϵ λ_i dβ_t^i,  i = 1, …, N,
(50)

where the nonlinearities (KF)^{i,N} are given by

(KF)^{i,N}(u^{1,N}, …, u^{N,N}) = ∫_B f( ∑_{j=1}^N u^{j,N} v_j(x) ) ( ∫_B w(x,y) v_i(y) dy ) dx
(51)

or, in the special case of Sect. 3, by

(KF)^{i,N}(u^{1,N}, …, u^{N,N}) = λ_i ∫_B f( ∑_{j=1}^N u^{j,N} v_j(x) ) v_i(x) dx,
(52)

respectively. The following theorem establishes the almost sure convergence of the Galerkin approximations to the solution of (4). Hence, we may be able to infer properties of the behavior of paths of the solution from the path behavior of the Galerkin approximations. We have deferred the proof of the theorem to the Appendix.

Theorem 8.1 It holds for all T>0 that

lim_{N→∞} sup_{t∈[0,T]} ‖U_t − U_t^N‖_{L²(B)} = 0  a.s.

If, in addition, the series ∑_{i=1}^∞ λ_i² v_i² converges in C(B), the functions v_i are Lipschitz continuous with Lipschitz constants L_i such that sup_{x∈B} ∑_{i=1}^∞ λ_i² L_i^{2ρ} |v_i(x)|^{2(1−ρ)} < ∞ for some ρ ∈ (0,1) (i.e., the conditions of Lemma 2.1 are satisfied), U_0 ∈ C(B) is such that lim_{N→∞} ‖U_0 − P_N U_0‖_0 = 0, and K is compact on C(B), then it holds for all T > 0 that

lim_{N→∞} sup_{t∈[0,T]} ‖U_t − U_t^N‖_0 = 0  a.s.
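The finite-dimensional system (50) with the nonlinearity (52) can be simulated directly, e.g., by an Euler–Maruyama scheme. A minimal sketch, assuming the special case where K and Q share the eigenpairs (λ_i, v_i), which are supplied on a uniform spatial grid (for instance from a numerical eigendecomposition of K as sketched at the end of Sect. 3); all numerical parameters are illustrative.

```python
import numpy as np

def simulate_galerkin(lam, v, x_grid, f, alpha=1.0, eps=0.1,
                      T=10.0, n_steps=2000, seed=2):
    """Euler-Maruyama for the N-dimensional Galerkin SDE (50) with nonlinearity (52);
    lam[i] are the eigenvalues lambda_i and v[:, i] the eigenfunctions v_i on x_grid."""
    rng = np.random.default_rng(seed)
    N = lam.size
    h = x_grid[1] - x_grid[0]            # uniform grid spacing for the quadrature
    dt = T / n_steps
    u = np.zeros((n_steps + 1, N))       # Galerkin coefficients u_t^{i,N}
    for k in range(n_steps):
        U = v @ u[k]                     # field U_t(x) = sum_j u_t^{j,N} v_j(x)
        KF = lam * ((v.T @ f(U)) * h)    # (KF)^{i,N} = lambda_i int_B f(U) v_i dx
        u[k + 1] = (u[k] + (-alpha * u[k] + KF) * dt
                    + eps * lam * np.sqrt(dt) * rng.normal(size=N))
    return u
```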

9 Approximating the LDP

The LDP in Theorem 7.1 is not immediately computable. Here, we show that a finite-dimensional approximation can be made and what the structure of this approximation entails. For simplicity, consider the case when the diagonal diffusion matrix D with entries D_{ii} = λ_i² is positive definite, i.e., λ_i ≠ 0 for all i ∈ N. Observe that the inverse of D induces an inner product on R^N for N ∈ N ∪ {∞} via

⟨a, b⟩_N := a^T D^{−1} b = [ D^{−1/2} a ]^T [ D^{−1/2} b ]  for a, b ∈ R^N,

where D is understood as the projection onto R^{N×N} if N < ∞. We are also going to use the notation introduced in Sect. 8 for the Galerkin approximation, i.e., u_t^{·,N} denotes the vector

(u_t^{1,N}, u_t^{2,N}, …, u_t^{N,N})^T ∈ R^N,
(53)

where the u_t^{i,N} are the solutions of the N-dimensional system (50). Note that throughout this section we shall always work with the Galerkin coefficients, e.g., u_t refers to the vector

(u_t^1, u_t^2, …)^T ∈ R^∞.

Furthermore, for arbitrary functions ϕ_t ∈ L²(B), which are used in the formulation of the rate function, we use the notation ϕ_t^{·,N} to denote the projection onto the first N Galerkin coefficients. Theorem 5.1 immediately implies the following:

Proposition 9.1 For the finite-dimensional Galerkin system (50) the rate function is given by

I^N(ϕ^{·,N}) = ½ ∫_0^T ⟨ ϕ̇_t^{·,N} − g^{·,N}(ϕ_t^{·,N}), ϕ̇_t^{·,N} − g^{·,N}(ϕ_t^{·,N}) ⟩_N dt  if ϕ^{·,N} ∈ u_0^{·,N} + H_1^N,  and I^N(ϕ^{·,N}) = +∞ otherwise,
(54)

where g^{i,N}(ϕ_t^{·,N}) = −α ϕ_t^{i,N} + (KF)^{i,N}(ϕ_t^{1,N}, …, ϕ_t^{N,N}).
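For computational purposes, (54) can be evaluated along a discretized path of Galerkin coefficients. A sketch, assuming a uniform time grid, forward differences for the time derivative, and a user-supplied map KF_N implementing the nonlinearities (51) or (52).

```python
import numpy as np

def rate_function_N(phi, dt, lam2, KF_N, alpha=1.0):
    """Quadrature evaluation of the rate function (54) for a coefficient path phi of
    shape (n_steps + 1, N); lam2[i] = lambda_i^2 are the diagonal entries of D and
    KF_N maps a vector in R^N to ((KF)^{1,N}, ..., (KF)^{N,N})."""
    dphi = np.diff(phi, axis=0) / dt                  # forward difference for phi'
    mid = 0.5 * (phi[:-1] + phi[1:])                  # midpoint values of the path
    g = -alpha * mid + np.apply_along_axis(KF_N, 1, mid)
    r = dphi - g                                      # residual phi' - g^{.,N}(phi)
    return 0.5 * dt * np.sum(r**2 / lam2[None, :])    # <r, r>_N integrated over time
```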

Recall from Sect. 7 that Theorem 7.1 provides a large deviation principle. For the case when Q is a positive operator, we may formally rewrite the control system (44) as

D^{−1/2} [ ẏ − ( −α y + KF(y) ) ] = v
(55)

so that the rate function for the Amari model can be expressed as

I(ϕ) = ½ ∫_0^T ∫_B D^{−1/2}[ ϕ̇_t − g(ϕ_t) ] · D^{−1/2}[ ϕ̇_t − g(ϕ_t) ] dx dt  if ϕ ∈ u_0 + H_1^∞,  and I(ϕ) = +∞ otherwise,
(56)

where g(ϕ_t) = −α ϕ_t + KF(ϕ_t) and D^{−1/2} u = ∑_{i=1}^∞ λ_i^{−1} ⟨u, v_i⟩ v_i. Therefore, the next result just states that the Galerkin approximation is consistent with the LDP.

Proposition 9.2 For each ϕ ∈ u_0 + H_1^∞, we have lim_{N→∞} |I(ϕ) − I^N(ϕ^{·,N})| = 0.

Proof Considering the finite-dimensional rate function (54), it suffices to notice that

⟨ ϕ̇_t^{·,N} − g^{·,N}(ϕ_t^{·,N}), ϕ̇_t^{·,N} − g^{·,N}(ϕ_t^{·,N}) ⟩_N = ∑_{i=1}^N (1/λ_i²) [ ϕ̇_t^{i,N} − g^{i,N}(ϕ_t^{·,N}) ]² = ∑_{i=1}^N ∫_B (1/λ_i²) [ ϕ̇_t^{i,N} v_i(x) − g^{i,N}(ϕ_t^{·,N}) v_i(x) ]² dx

by the orthonormality of the basis in L²(B). □

Hence, we may work with the finite-dimensional Galerkin system and its LDP for computational purposes. However, the truncation N may still be very large. We are going to show, using a formal analysis for a certain case, that there is an intrinsic multiscale structure of the rate function. We assume that we are in the special case considered in Sect. 3, where K and Q have the same eigenfunctions and the corresponding eigenvalues are given by λ_i and λ_i², respectively.

Lemma 9.1 For each N ∈ N, the first part of the rate function (54) can be rewritten as

I^N(ϕ^{·,N}) = ½ ∫_0^T ( a_1^N − 2 a_2^N + a_3^N ) dt,
(57)

where the three terms are given by

a_1^N = ⟨ ϕ̇_t^{·,N} + α ϕ_t^{·,N}, ϕ̇_t^{·,N} + α ϕ_t^{·,N} ⟩_N,  a_2^N = ⟨ ϕ̇_t^{·,N} + α ϕ_t^{·,N}, (KF)^{·,N}(ϕ_t^{·,N}) ⟩_N,  a_3^N = [ (K̃F)^{·,N}(ϕ_t^{·,N}) ]^T [ (K̃F)^{·,N}(ϕ_t^{·,N}) ]

and (K̃F)^{i,N} = (1/λ_i) (KF)^{i,N}.

Proof For notational simplicity, we shall temporarily omit in this proof the subscript of the inner product, ⟨·,·⟩_N = ⟨·,·⟩, as well as the Galerkin index, e.g., ϕ_t^{·,N} = ϕ_t, as it is understood that we work with N-dimensional vectors in this proof. Consider the following general calculation:

⟨ ϕ̇_t − g(ϕ_t), ϕ̇_t − g(ϕ_t) ⟩ = ⟨ϕ̇_t, ϕ̇_t⟩ − 2 ⟨ϕ̇_t, g(ϕ_t)⟩ + ⟨g(ϕ_t), g(ϕ_t)⟩ = ⟨ϕ̇_t, ϕ̇_t⟩ + 2α ⟨ϕ̇_t, ϕ_t⟩ − 2 ⟨ϕ̇_t, KF(ϕ_t)⟩ + ⟨KF(ϕ_t), KF(ϕ_t)⟩ + α² ⟨ϕ_t, ϕ_t⟩ − 2α ⟨ϕ_t, KF(ϕ_t)⟩ = ⟨ ϕ̇_t + α ϕ_t, ϕ̇_t + α ϕ_t ⟩ − 2 ⟨ ϕ̇_t + α ϕ_t, KF(ϕ_t) ⟩ + (K̃F)(ϕ_t)^T (K̃F)(ϕ_t),

and observe that the result is independent of N. □

It is important to point out that the LDP from Theorem 5.1 requires the infimum of the rate function. From Lemma 9.1, we know that the rate function splits into three terms. These terms are interesting in the asymptotic limit N → ∞. Suppose

ϕ̇_t^{N,N} + α ϕ_t^{N,N} = O( κ(N) )  and  (K̃F)^{N,N}(ϕ_t^{·,N}) = O( η(N) )

as N → ∞ for some nonnegative functions κ, η. Then Lemma 9.1 yields

a_1^N = O( κ(N)² λ_N^{−2} ),  a_2^N = O( κ(N) η(N) λ_N^{−1} ),  a_3^N = O( η(N)² ).

Lemma 9.2 Suppose there exists a positive constant K_f such that

sup_{x∈R} |f(x)| ≤ K_f,
(58)

then η(N)=1.

Proof A direct estimate yields

|(K̃F)^{j,N}(ϕ_t^{·,N})| ≤ ∫_B | f( ∑_{i=1}^N ϕ_t^{i,N} v_i(x) ) | |v_j(x)| dx ≤ K_f ∫_B |v_j(x)| dx.

Since ‖v_j‖_{L²(B)} = 1 and L²(B) ⊂ L¹(B), the last integral is uniformly bounded over j ∈ N by meas(B)^{1/2}. □

We remark that several typical functions f discussed in Sect. 2, such as f(u) = (1 + e^{−u})^{−1} and f(u) = (tanh(u) + 1)/2, are globally bounded, so that Lemma 9.2 does apply to many practical cases. In this situation, we get that

a_1^N = O( κ(N)² λ_N^{−2} ),  a_2^N = O( κ(N) λ_N^{−1} ),  a_3^N = O(1).

We make a case distinction between the different relative asymptotics of κ(N) and λ_N. Note that the following asymptotic relations are purely formal:

  • If κ(N) ≪ λ_N or κ(N) ∼ λ_N as N → ∞, then we can conclude that κ(N) → 0, i.e.,

    ϕ̇_t^{N,N} + α ϕ_t^{N,N} → 0  as N → ∞,
    (59)

since for trace-class noise we know that λ_N → 0. If we formally require that ϕ̇_t^{N,N} + α ϕ_t^{N,N} = 0 for N sufficiently large, then the higher-order Galerkin modes decay exponentially in time:

ϕ_t^{N,N} = ϕ_0^{N,N} e^{−αt}.
  • If κ(N) ≫ λ_N as N → ∞, then a_1 ≫ 2a_2 + a_3 and the first term dominates the asymptotics. But a_1^N ≥ 0 for all N, so that the rate function only has a finite infimum if a_1^N → 0 as N → ∞. This implies again that (59) holds in the case of a finite infimum.

Hence, we find in many reasonable first-exit problems for the Amari model with trace-class noise that there is a finite set of “slow” or “center-like” directions, for n ≤ N, and an infinite set of “fast” or “stable” directions, for n > N. Although we have made this observation from the rate function alone, it is entirely natural considering the structure of the Galerkin approximation. Indeed, for the case when the eigenvalues of K and Q are related, we may write (50) as

du_t^{i,N} = ( −α u_t^{i,N} + λ_i [⋯] ) dt + ϵ λ_i dβ_t^i
(60)

so that for a bounded nonlinearity f, which is represented in the terms [⋯] in (60), the higher-order modes should really just be governed by du_t^{i,N} = −α u_t^{i,N} dt.

Hence, Propositions 9.1–9.2 and the multiscale nature of the problem induced by the trace-class noise suggest a procedure for approximating the rate function and the associated LDP in practice. In particular, we may compute the eigenvalues and eigenfunctions of K and Q up to a sufficiently large given order N_*. This yields an explicit representation of the Galerkin system and the associated rate function. Then one may apply any finite-dimensional technique to understand the rate function. One may even find a better truncation order N < N_* based on the knowledge that the minimizer of the rate function must have components that decay (almost) exponentially in time for orders bigger than N.
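A heuristic sketch of this truncation step: given the eigenvalues λ_i² up to order N_*, keep as “slow” directions only the modes whose noise eigenvalues are non-negligible relative to the largest one, and treat the remaining “fast” modes as decaying exponentially as in (59); the threshold tol is an illustrative choice, not prescribed by the analysis.

```python
import numpy as np

def select_truncation(lam2, tol=1e-6):
    """Return a truncation order N such that lambda_i^2 / lambda_1^2 >= tol for i <= N;
    modes beyond N are treated as phi_t^{n,N} ~ phi_0^{n,N} e^{-alpha t}."""
    lam2 = np.sort(np.asarray(lam2))[::-1]           # decreasing eigenvalues
    keep = np.nonzero(lam2 / lam2[0] >= tol)[0]
    return int(keep[-1]) + 1 if keep.size else 1

# e.g. the eigenvalues (11) with d = 1 and xi = 0.5
lam2 = np.exp(-0.5**2 * np.arange(200)**2 / (4 * np.pi))
print(select_truncation(lam2))
```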

10 Outlook

In this paper, we have discussed several steps toward a better understanding of noise-induced transitions in continuum neural fields. Although we have provided the main basic elements via the LDP and finite-dimensional approximations, there are still several very interesting open problems.

We have demonstrated that a sharp Kramers’ rate calculation for neural fields with trace-class noise is very challenging, as the techniques for white-noise gradient-structure SPDEs cannot be applied directly. However, we have seen in Sect. 4 that the deterministic dynamics of neural fields frequently exhibits a classical bistable structure with a saddle state between stable equilibria. This suggests that there should be a Kramers’ law with exponential scaling in the noise intensity as well as a precisely computable prefactor. It is interesting to ask how this prefactor depends on the eigenvalues of the trace-class operator Q defining the Q-Wiener process. We expect that new technical tools are needed to answer this question.

From the viewpoint of experimental data, the exponential scaling of the LDP is relevant as it shows that noise-induced transitions have exponential interarrival times. This leads to the possibility that working memory as well as perceptual bistability could be governed by a Poisson process. However, the same phenomena could also be governed by a slowly varying variable, i.e., by an adaptive neural field [14]: the “fast” activity variable U in the Amari model is augmented by one or more “slow” variables. In this context, the assumptions on the equilibrium structure in Sect. 4 and on the noise in Sect. 3 are not necessary to produce a bistable switch; the fast variable U can, e.g., just have a single deterministically unstable equilibrium, and bistable, nonrandom switching between metastable states may occur. Of course, there is also the possibility that an intermediate regime between noise-induced and deterministic escape is relevant [53].

It is interesting to note that the same problem arises generically across many natural sciences in the study of critical transitions (or “tipping points”) [48, 59]. The question of which escape mechanism from a metastable state matches the data is often discussed controversially, and we shall not aim to provide a discussion here. However, our main goal of making the LDP and its associated rate functional as explicit as possible should definitely help to simplify the comparison between models and experiments. For example, a parameter study or data assimilation for the finite-dimensional Galerkin system considered in Theorem 8.1 and the associated rate function in Proposition 9.1 is often easier than working directly with the abstract solutions of the stochastic Amari model in C([0,T], L²(B)).

Studying this parameter dependence is an interesting open question, which we aim to address in future work. In particular, the next step is to use the Galerkin approximations in Sect. 8 and the associated LDP in Sect. 9 for numerical purposes [49]. Recent work for SPDEs [8] suggests that a spectral method can also be efficient for stochastic neural fields. Results on numerical continuation and jump heights for SDEs [47] can also be immediately transferred to the spectral approximation, which would allow for studies of bifurcations and associated noise-induced phenomena.

One may also ask how far the technical assumptions we make in this paper can be weakened. It is not clear which parts of the global Lipschitz assumptions may be replaced by local assumptions or removed altogether. Similar remarks apply to the multiscale nature of the problem induced by the decay of the eigenvalues of Q. How far this observation can be exploited to derive more efficient analytical as well as numerical techniques remains to be investigated.

On a more abstract level, it would certainly be desirable to extend our basic framework to other topics that have already been considered for deterministic neural fields. A generalization to activity-based models with nonlinearity f( ∫_B w(x,y) u(y) dy ) seems possible. Furthermore, it would be highly desirable to go beyond stationary solutions and investigate noise-induced switching and transitions for traveling waves and patterns.

Appendix: Convergence of the Galerkin Approximation

Proof of Theorem 8.1 We fix a T > 0. Throughout the proof, an unspecified norm ‖·‖ or operator norm, respectively, refers either to the Hilbert space L²(B) or to the Banach space C(B), and estimates using the unspecified notation are valid in both cases. Furthermore, C > 0 denotes an arbitrary deterministic constant, which may change from line to line but depends only on T. We begin the proof by obtaining an a priori growth bound on the solution of the Amari equation (4). Using the linear growth condition on F implied by its Lipschitz continuity, we obtain the estimate

‖U_t‖ ≤ e^{−αt} ‖U_0‖ + C ∫_0^t e^{−α(t−s)} ( 1 + ‖U_s‖ ) ds + ϵ ‖O_t‖.

Due to Gronwall’s inequality, there exists a deterministic constant C such that, almost surely,

sup_{t∈[0,T]} ‖U_t‖ ≤ C ( 1 + ‖U_0‖ + sup_{t∈[0,T]} ‖O_t‖ ) e^{CT}  a.s.
(61)

Note that O is an Ornstein–Uhlenbeck process, and thus it holds that

sup_{t∈[0,T]} ‖O_t‖_{L²} < ∞

almost surely and under the assumptions of Lemma 2.1 in addition

sup_{t∈[0,T]} ‖O_t‖_0 < ∞

almost surely.

Let P_N denote the projection operator from L²(B) onto the subspace spanned by the first N basis functions. Then we find that, in Hilbert space notation, the Nth Galerkin approximation satisfies

U_t^N = e^{−αt} P_N U_0 + ∫_0^t e^{−α(t−s)} P_N KF(U_s^N) ds + ϵ O_t^N.

Here, O^N is shorthand for the truncated stochastic convolution

O_t^N := ∑_{i=1}^N λ_i ∫_0^t e^{−α(t−s)} dβ_s^i v_i.
(62)

Hence, we obtain for the error of the Galerkin approximation

U_t − U_t^N = e^{−αt} ( U_0 − P_N U_0 ) + ∫_0^t e^{−α(t−s)} ( KF(U_s) − P_N KF(U_s^N) ) ds + ϵ ( O_t − O_t^N ).

Adding and subtracting the obvious terms yields for the norm the estimate

‖U_t − U_t^N‖ ≤ e^{−αt} ‖U_0 − P_N U_0‖ + ‖P_N K‖ ∫_0^t e^{−α(t−s)} ‖F(U_s) − F(U_s^N)‖ ds + ‖K − P_N K‖ ∫_0^t e^{−α(t−s)} ‖F(U_s)‖ ds + ϵ ‖O_t − O_t^N‖,

where ‖P_N K‖_{L²} ≤ ‖K‖_{L²} and sup_{N∈N} ‖P_N K‖_0 < ∞ as a consequence of [[3], Lemma 11.1.4] (cf. the application of this result below). Next, using the Lipschitz and linear growth conditions on F, applying Gronwall’s inequality, taking the supremum over all t ∈ [0,T], and estimating using the bound (61) yield

sup_{t∈[0,T]} ‖U_t − U_t^N‖ ≤ C ( ‖U_0 − P_N U_0‖ + ‖K − P_N K‖ ( 1 + ‖U_0‖ + sup_{t∈[0,T]} ‖O_t‖ ) ) + C sup_{t∈[0,T]} ‖O_t − O_t^N‖.
(63)

It remains to show that the individual terms on the right-hand side converge to zero as N → ∞ almost surely.

  • It clearly holds that ‖U_0 − P_N U_0‖_{L²} → 0, and the convergence ‖U_0 − P_N U_0‖_0 → 0 holds by assumption.

  • Next, as argued above, (1 + ‖U_0‖ + sup_{t∈[0,T]} ‖O_t‖) is a.s. finite, and the compactness of the operator K implies ‖K − P_N K‖ → 0 as N → ∞; see [[3], Lemma 12.1.4].

  • Finally, the third error term sup_{t∈[0,T]} ‖O_t − O_t^N‖ vanishes if the Galerkin approximations O^N of the Ornstein–Uhlenbeck process O converge almost surely in the spaces C([0,T], L²(B)) and C([0,T], C(B)), respectively. This convergence is proven in Lemma A.1 below.

The proof is completed. □

The following lemma contains the convergence of the Galerkin approximation of the Ornstein–Uhlenbeck process necessary for proving Theorem 8.1.

Lemma A.1 There exists a sequence b_N > 0 with lim_{N→∞} b_N = 0 such that for all T > 0 and all δ > 0 there exists a random variable Z_δ with E|Z_δ|^p < ∞ for all p ≥ 1 such that

\[
\sup_{t\in[0,T]}\|O_t - O_t^N\|_{L^2} \le Z_\delta\, b_N^{1-\delta}
\]

almost surely. If, in addition, the series $\sum_{i=1}^\infty \lambda_i^2 v_i^2$ converges in $C(B)$ and the functions $v_i$ are Lipschitz continuous with Lipschitz constants $L_i$ such that $\sup_{x\in B}\sum_{i=1}^\infty \lambda_i^2 L_i^{2\rho}|v_i(x)|^{2(1-\rho)}<\infty$ for some $\rho\in(0,1)$, then it further holds that

\[
\sup_{t\in[0,T]}\|O_t - O_t^N\|_{0} \le Z_\delta\, b_N^{1-\delta}
\]

almost surely.

Remark A.1 Assumptions on the speed of convergence of the series $\sum_{i=1}^\infty\lambda_i^2$ and $\sum_{i=1}^\infty\lambda_i^2 v_i^2$ and of $\sup_{x\in B}\sum_{i=1}^\infty\lambda_i^2 L_i^{2\rho}|v_i(x)|^{2(1-\rho)}$ readily yield a rate of convergence for the Galerkin approximation, due to the definition of the constants $b_N$ in the proof of the lemma.
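As an illustration (our example, not taken from the paper): if the eigenvalues decay polynomially, say $\lambda_i\le Ci^{-s}$ for some $s>1/2$, then

\[
\sum_{i=N+1}^{\infty}\lambda_i^2 \;\le\; C^2\int_N^{\infty}x^{-2s}\,dx \;=\; \frac{C^2}{2s-1}\,N^{1-2s},
\]

so that in the $L^2$-case one may choose $b_N=\mathcal{O}(N^{1/2-s})$ in (64) below, and (70) then gives the pathwise rate $\sup_{t\in[0,T]}\|O_t-O_t^N\|_{L^2}\le Z_\delta N^{(1/2-s)(1-\delta)}$.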

Proof of Lemma A.1 As in the proof of Theorem 8.1, the unspecified norm $\|\cdot\|$ denotes either the norm in $L^2(B)$ or in $C(B)$, and estimates are valid in both cases. We fix $T>0$, $\rho\in(0,1)$, and $p\in\mathbb{N}$ with $p>2d/\rho$. Throughout the proof, $C>0$ denotes a constant that changes from line to line but depends only on the fixed parameters $T$, $p$, $\rho$, $\alpha$, and the domain $B\subset\mathbb{R}^d$.

Then we obtain for all $N,M\in\mathbb{N}$ with $M<N$, using the factorization method (cf. [23, Sect. 5.3]) similarly to the proof of [8, Lemma 5.6], the estimate

\[
\Bigl(\mathbb{E}\sup_{t\in[0,T]}\|O_t^N - O_t^M\|^p\Bigr)^{1/p} \le C\sup_{t\in[0,T]}\bigl(\mathbb{E}\|Y_t^{M,N}\|^p\bigr)^{1/p},
\]

where $Y^{M,N}$ is the process defined by

\[
Y_t^{M,N} = \sum_{i=M+1}^N \lambda_i \int_0^t (t-s)^{-\rho/2} e^{-\alpha(t-s)}\,d\beta_s^i\, v_i.
\]
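For orientation, we recall the factorization identity underlying this estimate (a standard computation with exponent $a=\rho/2$; cf. [23, Sect. 5.3]): by the stochastic Fubini theorem,

\[
O_t^N - O_t^M \;=\; \frac{\sin(\pi\rho/2)}{\pi}\int_0^t (t-s)^{\rho/2-1}e^{-\alpha(t-s)}\,Y_s^{M,N}\,ds,
\]

and Hölder's inequality in $s$ bounds $\sup_{t\in[0,T]}\|O_t^N-O_t^M\|$ pathwise by $C\bigl(\int_0^T\|Y_s^{M,N}\|^p\,ds\bigr)^{1/p}$; the deterministic factor $(t-s)^{(\rho/2-1)p/(p-1)}$ is integrable precisely because $p>2d/\rho\ge 2/\rho$.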

In order to estimate the $p$-th mean of the process $Y^{M,N}$, we proceed separately for the two cases $L^2(B)$ and $C(B)$.

The case of $L^2(B)$: Due to the orthogonality of the basis functions and employing Hölder's inequality, one obtains

\begin{align*}
\mathbb{E}\|Y_t^{M,N}\|_{L^2}^p &= \mathbb{E}\biggl(\sum_{i=M+1}^N \lambda_i^2\Bigl(\int_0^t (t-s)^{-\rho/2}e^{-\alpha(t-s)}\,d\beta_s^i\Bigr)^2\biggr)^{p/2}\\
&= \mathbb{E}\biggl(\sum_{i=M+1}^N \lambda_i^{2(p-2)/p}\Bigl(\lambda_i^{2/p}\int_0^t (t-s)^{-\rho/2}e^{-\alpha(t-s)}\,d\beta_s^i\Bigr)^2\biggr)^{p/2}\\
&\le \mathbb{E}\biggl(\Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{(p-2)/p}\Bigl(\sum_{i=M+1}^N\Bigl|\lambda_i^{2/p}\int_0^t (t-s)^{-\rho/2}e^{-\alpha(t-s)}\,d\beta_s^i\Bigr|^p\Bigr)^{2/p}\biggr)^{p/2}\\
&\le \Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{(p-2)/2}\sum_{i=M+1}^N \mathbb{E}\Bigl|\lambda_i^{2/p}\int_0^t (t-s)^{-\rho/2}e^{-\alpha(t-s)}\,d\beta_s^i\Bigr|^p.
\end{align*}

Next, as the stochastic integrals on the right-hand side are centered Gaussian random variables, [8, Lemma 5.2] (see Note 3) yields for all $t\le T$

\begin{align*}
\mathbb{E}\|Y_t^{M,N}\|_{L^2}^p &\le C\Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{(p-2)/2}\sum_{i=M+1}^N \lambda_i^2\Bigl(\int_0^t (t-s)^{-\rho}e^{-2\alpha(t-s)}\,ds\Bigr)^{p/2}\\
&\le C\Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{(p-2)/2}\sum_{i=M+1}^N \lambda_i^2\Bigl(\int_0^T s^{-\rho}e^{-2\alpha s}\,ds\Bigr)^{p/2}\\
&\le C\Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{p/2}.
\end{align*}

Therefore, we obtain for all $M,N\in\mathbb{N}$ with $M<N$

\[
\Bigl(\sup_{t\in[0,T]}\mathbb{E}\|Y_t^{M,N}\|_{L^2}^p\Bigr)^{1/p} \le C\Bigl(\sum_{i=M+1}^N \lambda_i^2\Bigr)^{1/2} \le C\Bigl(\sum_{i=M+1}^\infty \lambda_i^2\Bigr)^{1/2}, \tag{64}
\]

where the final upper bound decreases to zero for $M\to\infty$ by assumption.

The case of $C(B)$: In this case, the estimates are somewhat more involved. As $\rho/2>d/p$, the continuous embedding of the Sobolev–Slobodeckij space $W^{\rho/2,p}(B)$ into $C(B)$ (cf. [58, Sects. 2.2.4 and 2.4.4]) and [8, Lemma 5.2] yield the estimates

\begin{align*}
\sup_{t\in[0,T]}\mathbb{E}\|Y_t^{M,N}\|_0^p &\le C\sup_{t\in[0,T]}\int_B\int_B\frac{\mathbb{E}|Y_t^{M,N}(x)-Y_t^{M,N}(y)|^p}{|x-y|^{d+\rho p/2}}\,dx\,dy + C\sup_{t\in[0,T]}\int_B\mathbb{E}|Y_t^{M,N}(x)|^p\,dx\\
&\le C\sup_{t\in[0,T]}\int_B\int_B\frac{\bigl(\mathbb{E}|Y_t^{M,N}(x)-Y_t^{M,N}(y)|^2\bigr)^{p/2}}{|x-y|^{d+\rho p/2}}\,dx\,dy + C\sup_{t\in[0,T]}\int_B\bigl(\mathbb{E}|Y_t^{M,N}(x)|^2\bigr)^{p/2}\,dx. \tag{65}
\end{align*}

We proceed by estimating the two expectation terms on the right-hand side. For all $M<N$ and all $x,y\in B$, we obtain for the first term

\begin{align*}
\mathbb{E}|Y_t^{M,N}(x)-Y_t^{M,N}(y)|^2 &= \mathbb{E}\Bigl|\sum_{i=M+1}^N\lambda_i\int_0^t (t-s)^{-\rho/2}e^{-\alpha(t-s)}\,d\beta_s^i\,\bigl(v_i(x)-v_i(y)\bigr)\Bigr|^2\\
&\le \sum_{i=M+1}^N\lambda_i^2\int_0^T s^{-\rho}e^{-2\alpha s}\,ds\,|v_i(x)-v_i(y)|^2\\
&\le C\sum_{i=M+1}^N\lambda_i^2 L_i^{2\rho}\bigl(|v_i(x)|+|v_i(y)|\bigr)^{2(1-\rho)}|x-y|^{2\rho} \tag{66}
\end{align*}

for any $\rho\in(0,1)$, and for the second term

\[
\mathbb{E}|Y_t^{M,N}(x)|^2 \le \sum_{i=M+1}^N\lambda_i^2\int_0^t (t-s)^{-\rho}e^{-2\alpha(t-s)}\,ds\,v_i(x)^2 \le C\sum_{i=M+1}^N\lambda_i^2 v_i(x)^2. \tag{67}
\]

Next, applying the estimates (67) and (66) to the right-hand side of (65) yields (note that $\rho p/2-d>0$)

\begin{align*}
\Bigl(\sup_{t\in[0,T]}\mathbb{E}\|Y_t^{M,N}\|_0^p\Bigr)^{1/p} &\le C\biggl(\int_B\int_B |x-y|^{\rho p/2-d}\Bigl(\sum_{i=M+1}^N\lambda_i^2 L_i^{2\rho}\bigl(|v_i(x)|+|v_i(y)|\bigr)^{2(1-\rho)}\Bigr)^{p/2}dx\,dy + \int_B\Bigl(\sum_{i=M+1}^N\lambda_i^2 v_i(x)^2\Bigr)^{p/2}dx\biggr)^{1/p}\\
&\le C\biggl(\int_B\int_B |x-y|^{\rho p/2-d}\,dx\,dy\,\Bigl(\sup_{x\in B}\sum_{i=M+1}^N\lambda_i^2 L_i^{2\rho}|v_i(x)|^{2(1-\rho)}\Bigr)^{p/2} + \Bigl(\sup_{x\in B}\Bigl|\sum_{i=M+1}^N\lambda_i^2 v_i(x)^2\Bigr|\Bigr)^{p/2}\biggr)^{1/p}\\
&\le C\Bigl(\sup_{x\in B}\Bigl|\sum_{i=M+1}^N\lambda_i^2 v_i(x)^2\Bigr| + \sup_{x\in B}\sum_{i=M+1}^N\lambda_i^2 L_i^{2\rho}|v_i(x)|^{2(1-\rho)}\Bigr)^{1/2}
\end{align*}

for any $\rho\in(0,1)$. Due to the assumptions of the lemma, the two summations on the right-hand side converge for $N\to\infty$, and thus we obtain for all $M,N\in\mathbb{N}$ with $M<N$ the estimate

\[
\Bigl(\sup_{t\in[0,T]}\mathbb{E}\|Y_t^{M,N}\|_0^p\Bigr)^{1/p} \le C\Bigl(\sup_{x\in B}\Bigl|\sum_{i=M+1}^\infty\lambda_i^2 v_i(x)^2\Bigr| + \sup_{x\in B}\sum_{i=M+1}^\infty\lambda_i^2 L_i^{2\rho}|v_i(x)|^{2(1-\rho)}\Bigr)^{1/2}, \tag{68}
\]

where the right-hand side decreases to zero for $M\to\infty$.

Overall, we infer from the estimates (64) and (68) that $O^N$ is a Cauchy sequence in the two spaces $C([0,T],L^2(B))$ and $C([0,T],C(B))$ with respect to convergence in the $p$-th mean, and the limit is given by the process $O$. Moreover, it holds that

\[
\Bigl(\mathbb{E}\sup_{t\in[0,T]}\|O_t - O_t^N\|^p\Bigr)^{1/p} \le C\,b_N \quad\text{for all } N\in\mathbb{N}, \tag{69}
\]

where the constant $C$ depends only on $p$ but is independent of $N$, and the sequence $b_N$ is independent of $p$ with $\lim_{N\to\infty}b_N=0$. As we fixed $p\in\mathbb{N}$ sufficiently large at the beginning of the proof, the result (69) holds for all sufficiently large $p\in\mathbb{N}$. Then, however, Jensen's inequality implies that (69) holds for all $p\in[1,\infty)$. Proceeding as in the proof of [44, Lemma 2.1], using the Chebyshev–Markov inequality and the Borel–Cantelli lemma, one obtains that for all $\delta>0$ there exists a random variable $Z_\delta$ with $\mathbb{E}|Z_\delta|^p<\infty$ for all $p\ge 1$ such that

\[
\sup_{t\in[0,T]}\|O_t - O_t^N\| \le Z_\delta\, b_N^{1-\delta} \quad\text{almost surely.} \tag{70}
\]
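To sketch this final step in a concrete setting (an illustrative case, not the generality of [44]): suppose $b_N\le CN^{-\gamma}$ for some $\gamma>0$. By the Chebyshev–Markov inequality and (69),

\[
\mathbb{P}\Bigl(\sup_{t\in[0,T]}\|O_t-O_t^N\| > b_N^{1-\delta}\Bigr) \;\le\; b_N^{-(1-\delta)p}\,\mathbb{E}\sup_{t\in[0,T]}\|O_t-O_t^N\|^p \;\le\; C^p b_N^{\delta p},
\]

which is summable over $N$ once $p>1/(\gamma\delta)$; the Borel–Cantelli lemma then yields $\sup_{t\in[0,T]}\|O_t-O_t^N\|\le b_N^{1-\delta}$ for all but finitely many $N$ almost surely, and $Z_\delta$ absorbs the finitely many exceptional indices.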

This completes the proof. □

Notes

  1. We note that the boundedness assumption on the domain in this study is only necessary when dealing with results in the space $C(B)$, as this is the appropriate space for the LDP results. All other results in this paper that only involve the space $L^2(B)$, e.g., the existence of solutions and the convergence of the Galerkin approximation, are also valid for unbounded spatial domains.

  2. The existence of such a solution is guaranteed by standard results on deterministic equations (cf. [23, Sect. A.3]) as long as $Q^{1/2}$ maps $L^2(B)$ continuously into $C(B)$. This is easily established: the unique square root $Q^{1/2}$ of $Q$ is the Hilbert–Schmidt operator given by $Q^{1/2}g=\sum_{i=1}^\infty\lambda_i\langle v_i,g\rangle v_i$ for all $g\in L^2(B)$, and in order to show that $Q^{1/2}g\in C(B)$, it remains to establish that the partial sums converge uniformly on $B$. This holds, as for all $x\in B$,

    \[
    \Bigl|\sum_{i=N}^{\infty}\lambda_i\langle v_i,g\rangle v_i(x)\Bigr| \le \Bigl(\sum_{i=N}^{\infty}\lambda_i^2 v_i(x)^2\Bigr)^{1/2}\Bigl(\sum_{i=N}^{\infty}\langle v_i,g\rangle^2\Bigr)^{1/2} \le \Bigl(\sup_{x\in B}\Bigl|\sum_{i=1}^{\infty}\lambda_i^2 v_i(x)^2\Bigr|\Bigr)^{1/2}\Bigl(\sum_{i=N}^{\infty}\langle v_i,g\rangle^2\Bigr)^{1/2}.
    \]

    Hence, the upper bound, which is finite due to (7), is independent of $x$ and converges to zero for $N\to\infty$. Moreover, we further find that $g(t)\in L^2((0,T),L^2(B))$ implies $Q^{1/2}g(t)\in L^2((0,T),C(B))$. A small numerical illustration of this uniform convergence is given after these notes.

  3. For a centered Gaussian random variable $Z$, it holds that $\mathbb{E}|Z|^p\le p!\,(\mathbb{E}Z^2)^{p/2}$ for all $p\in\mathbb{N}$.
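As announced in Note 2, the uniform convergence of the spectral series for $Q^{1/2}g$ is easy to visualize numerically. The following sketch evaluates the truncated series and prints the decaying sup-norm gaps between successive truncation levels; the cosine basis, the eigenvalues $\lambda_i$, and the test function $g$ are illustrative assumptions, not specified in the paper. Note that $Q^{1/2}g$ comes out continuous even though $g$ itself is discontinuous.

```python
import numpy as np

# Illustrative evaluation of Q^{1/2} g = sum_i lambda_i <v_i, g> v_i by
# truncation: cosine basis on B = [0,1] and lambda_i = (1+i)^{-2} are
# hypothetical choices for this sketch.
M, N = 512, 64
x = (np.arange(M) + 0.5) / M
i_idx = np.arange(N)
V = np.where(i_idx[:, None] == 0, 1.0,
             np.sqrt(2.0) * np.cos(np.pi * i_idx[:, None] * x[None, :]))
lam = 1.0 / (1.0 + i_idx) ** 2

g = np.sign(x - 0.3)                    # discontinuous, but in L^2(B)
coef = (V @ g) / M                      # <v_i, g> via midpoint quadrature
partial = np.cumsum((lam * coef)[:, None] * V, axis=0)   # partial sums

# sup-norm gaps between truncation levels 8, 16, 24, ...: they decay,
# reflecting the uniform (Cauchy) convergence established in Note 2.
print(np.abs(np.diff(partial[7::8], axis=0)).max(axis=1))
```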

References

  1. Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27: 77–87.
  2. Arrhenius S: Über die Reaktionsgeschwindigkeit bei der Inversion von Rohrzucker durch Säuren. Z Phys Chem 1889, 4: 226–248.
  3. Atkinson K, Han W: Theoretical Numerical Analysis. 2nd edition. Springer, Berlin; 2005.
  4. Barret F: Sharp asymptotics of metastable transition times for one-dimensional SPDEs. arXiv:1201.4440; 2012.
  5. Berglund N: Kramers’ law: validity, derivations and generalisations. arXiv:1106.5799v1; 2011.
  6. Berglund N, Gentz B: Anomalous behavior of the Kramers rate at bifurcations in classical field theories. J Phys A, Math Theor 2009, 42: Article ID 052001.
  7. Berglund N, Gentz B: Sharp estimates for metastable lifetimes in parabolic SPDEs: Kramers’ law and beyond. arXiv:1202.0990; 2012.
  8. Blömker D, Jentzen A: Galerkin approximations for the stochastic Burgers equation. SIAM J Numer Anal 2013, doi:10.1137/110845756.
  9. Bovier A, Eckhoff M, Gayrard V, Klein M: Metastability in reversible diffusion processes. I. Sharp asymptotics for capacities and exit times. J Eur Math Soc 2004, 6(4):399–424.
  10. Bovier A, Gayrard V, Klein M: Metastability in reversible diffusion processes. II. Precise estimates for small eigenvalues. J Eur Math Soc 2005, 7: 69–99.
  11. Brackley CA, Turner MS: Random fluctuations of the firing rate function in a continuum neural field model. Phys Rev E 2007, 75: Article ID 041913.
  12. Bressloff PC: Traveling fronts and wave propagation failure in an inhomogeneous neural network. Physica D 2001, 155: 83–100.
  13. Bressloff PC: Metastable states and quasicycles in a stochastic Wilson–Cowan model. Phys Rev E 2010, 82: Article ID 051903.
  14. Bressloff PC: Spatiotemporal dynamics of continuum neural fields. J Phys A, Math Theor 2012, 45: Article ID 033001.
  15. Bressloff PC, Webber MA: Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst 2012, 11(2):708–740.
  16. Bressloff PC, Wilkerson J: Traveling pulses in a stochastic neural field model of direction selectivity. Front Comput Neurosci 2012, 6: Article ID 90.
  17. Cardon-Weber C: Large deviations for a Burgers’-type SPDE. Stoch Process Appl 1999, 84(1):53–70.
  18. Cerrai S, Röckner M: Large deviations for invariant measures of general stochastic reaction-diffusion systems. C R Math Acad Sci 2003, 337: 597–602.
  19. Cerrai S, Röckner M: Large deviations for stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Ann Probab 2004, 32: 1–40.
  20. Coombes S: Waves, bumps, and patterns in neural field theories. Biol Cybern 2005, 93: 91–108.
  21. Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J Appl Dyn Syst 2004, 4: 574–600.
  22. Coombes S, Laing CR, Schmidt H, Svanstedt N, Wyller JA: Waves in random neural media. Discrete Contin Dyn Syst, Ser A 2012, 32: 2951–2970.
  23. Da Prato G, Zabczyk J: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge; 1992.
  24. Da Prato G, Jentzen A, Röckner M: A mild Itô formula for SPDEs. arXiv:1009.3526v4; 2012.
  25. Deimling K: Nonlinear Functional Analysis. Dover, Mineola; 2010.
  26. Dembo A, Zeitouni O: Large Deviations Techniques and Applications. Applications of Mathematics 38. Springer, Berlin; 1998.
  27. Destexhe A, Rudolph-Lilith M: Neuronal Noise. Springer, Berlin; 2012.
  28. Enculescu M, Bestehorn M: Liapunov functional for a delayed integro-differential equation model of a neural field. Europhys Lett 2007, 77: Article ID 68007.
  29. Ermentrout GB, McLeod JB: Existence and uniqueness of travelling waves for a neural network. Proc R Soc Edinb A 1993, 123(3):461–478.
  30. Evans LC: Partial Differential Equations. 2nd edition. Am Math Soc, Providence; 2010.
  31. Eyring H: The activated complex in chemical reactions. J Chem Phys 1935, 3: 107–115.
  32. Faris WG, Jona-Lasinio G: Large fluctuations for a nonlinear heat equation with noise. J Phys A, Math Gen 1982, 15: 3025–3055.
  33. Faugeras O, MacLaurin J: A large deviation principle for networks of rate neurons with correlated synaptic weights. arXiv:1302.1029v3; 2013.
  34. Freidlin MI, Wentzell AD: Random Perturbations of Dynamical Systems. Springer, Berlin; 1998.
  35. Funahashi S, Bruce CJ, Goldman-Rakic PS: Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex. J Neurophysiol 1989, 61(2):331–349.
  36. García-Ojalvo J, Sancho JM: Noise in Spatially Extended Systems. Springer, New York; 1999.
  37. Gerstner W, Kistler W: Spiking Neuron Models. Cambridge University Press, Cambridge; 2002.
  38. Ginzburg I, Sompolinsky H: Theory of correlations in stochastic neural networks. Phys Rev E 1994, 50: 3171–3191.
  39. Guo Y, Chow CC: Existence and stability of standing pulses in neural networks: I. Existence. SIAM J Appl Dyn Syst 2005, 4(2):217–248.
  40. Guo Y, Chow CC: Existence and stability of standing pulses in neural networks: II. Stability. SIAM J Appl Dyn Syst 2005, 4(2):249–281.
  41. Hutt A, Longtin A, Schimansky-Geier L: Additive noise-induced Turing transitions in spatial systems with application to neural fields and the Swift–Hohenberg equation. Physica D 2008, 237: 755–773.
  42. Jin D, Liang D, Peng J: Existence and properties of stationary solution of dynamical neural field. Nonlinear Anal, Real World Appl 2011, 12: 2706–2716.
  43. Kilpatrick ZP, Ermentrout B: Wandering bumps in stochastic neural fields. SIAM J Appl Dyn Syst 2013, doi:10.1137/120877106.
  44. Kloeden PE, Neuenkirch A: The pathwise convergence of approximation schemes for stochastic differential equations. LMS J Comput Math 2007, 10: 235–253.
  45. Kramers HA: Brownian motion in a field of force and the diffusion model of chemical reactions. Physica 1940, 7(4):284–304.
  46. Kubota S, Hamaguchi K, Aihara K: Local excitation solutions in one-dimensional neural fields by external input stimuli. Neural Comput Appl 2009, 18: 591–602.
  47. Kuehn C: Deterministic continuation of stochastic metastable equilibria via Lyapunov equations and ellipsoids. SIAM J Sci Comput 2012, 34(3):A1635–A1658.
  48. Kuehn C: A mathematical framework for critical transitions: normal forms, variance and applications. J Nonlinear Sci 2013, doi:10.1007/s00332-012-9158-x.
  49. Kuehn C, Riedler MG: Spectral approximations for stochastic neural fields. In preparation; 2013.
  50. Laing C, Lord G (Eds): Stochastic Methods in Neuroscience. Oxford University Press, London; 2009.
  51. Laing CR, Troy WC: PDE methods for nonlocal models. SIAM J Appl Dyn Syst 2003, 2(3):487–516.
  52. Laing CR, Troy WC, Gutkin B, Ermentrout B: Multiple bumps in a neuronal model of working memory. SIAM J Appl Math 2002, 63(1):62–97.
  53. Meisel C, Kuehn C: On spatial and temporal multilevel dynamics and scaling effects in epileptic seizures. PLoS ONE 2012, 7(2): Article ID e30371.
  54. Moreno-Bote R, Rinzel J, Rubin N: Noise-induced alternations in an attractor network model of perceptual bistability. J Neurophysiol 2007, 98(3):1125–1139.
  55. Potthast R, Beim Graben P: Existence and properties of solutions for neural field equations. Math Methods Appl Sci 2010, 33(8):935–949.
  56. Prévôt C, Röckner M: A Concise Course on Stochastic Partial Differential Equations. Springer, Berlin; 2007.
  57. Röckner M, Wang F-Y, Wu L: Large deviations for stochastic generalized porous media equations. Stoch Process Appl 2006, 116: 1677–1689.
  58. Runst T, Sickel W: Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations. de Gruyter, Berlin; 1996.
  59. Scheffer M, Bascompte J, Brock WA, Brovkin V, Carpenter SR, Dakos V, Held H, van Nes EH, Rietkerk M, Sugihara G: Early-warning signals for critical transitions. Nature 2009, 461: 53–59.
  60. Shardlow T: Numerical simulation of stochastic PDEs for excitable media. J Comput Appl Math 2005, 175(2):429–446.
  61. Soula H, Chow CC: Stochastic dynamics of a finite-size spiking neural network. Neural Comput 2007, 19(12):3262–3292.
  62. van Ee R: Dynamics of perceptual bi-stability for stereoscopic slant rivalry and a comparison with grating, house-face, and Necker cube rivalry. Vis Res 2005, 45: 29–40.
  63. Veltz R, Faugeras O: Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst 2010, 9(3):954–998.
  64. Wilson H, Cowan J: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern 1973, 13(2):55–80.


Acknowledgements

CK would like to thank the European Commission (EC/REA) for support via a Marie Curie International Reintegration Grant and the Austrian Academy of Sciences (ÖAW) for support via an APART Fellowship. We would also like to thank two anonymous referees, whose comments helped to improve the manuscript.

Author information


Correspondence to Christian Kuehn.

Additional information

Competing Interests

The authors declare that they have no competing interests.

Authors’ Contributions

Both authors contributed equally to the paper.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article


Cite this article

Kuehn, C., Riedler, M.G. Large Deviations for Nonlocal Stochastic Neural Fields. J. Math. Neurosc. 4, 1 (2014). https://doi.org/10.1186/2190-8567-4-1

