
Numerical Bifurcation Theory for High-Dimensional Neural Models

Abstract

Numerical bifurcation theory involves finding and then following certain types of solutions of differential equations as parameters are varied, and determining whether they undergo any bifurcations (qualitative changes in behaviour). The primary technique for doing this is numerical continuation, where the solution of interest satisfies a parametrised set of algebraic equations, and branches of solutions are followed as the parameter is varied. An effective way to do this is with pseudo-arclength continuation. We give an introduction to pseudo-arclength continuation and then demonstrate its use in investigating the behaviour of a number of models from the field of computational neuroscience. The models we consider are high dimensional, as they result from the discretisation of neural field models—nonlocal differential equations used to model macroscopic pattern formation in the cortex. We consider both stationary and moving patterns in one spatial dimension, and then translating patterns in two spatial dimensions. A variety of results from the literature are discussed, and a number of extensions of the technique are given.

1 Introduction

Many models that arise in computational neuroscience are of the form

$$u_t = g(u;\mu),$$
(1)

where u may be a finite-dimensional vector or a function (of space and time, for example), μ is a vector of parameters, and the derivative is with respect to time, t. An ambitious goal when studying a model of the form (1) is to completely understand the nature of all of its solutions, for all possible values of μ. Analytically solving (1) would give us this information, but for many functions g such a solution is impossible to find. Instead we must concentrate on finding qualitative information about the solutions of (1), and on how they change as the parameter μ is varied. Questions we would like to answer include: (a) What do typical solutions do as t→∞, i.e. what is the “steady state” behaviour of a neuron or neural system? (b) Are there “special” initial conditions or states that the system can be in for which the long-time behaviour is different from that for a “typical” initial condition? (c) How do the answers to these questions depend on the values of μ? In particular, can the dynamics of (1) change qualitatively if a parameter (for example, the input current to a neuron) is changed? In order to answer these questions we often concentrate on certain types of solutions of (1). Examples include fixed points (i.e. values of u such that g(u;μ)=0), periodic orbits (i.e. solutions which are periodic in time), and (in spatially dependent systems) patterned states such as plane and spiral waves. Solutions like these can be stable (i.e. all initial conditions in some neighbourhood of a solution are attracted to it) or unstable (some nearby initial conditions leave a neighbourhood of it).

For many systems of interest, finding solutions of the type just mentioned, and their stability, can only be done numerically using a computer. Even the simplest case of finding all fixed points can be non-trivial, as there may be many or even an infinite number of them. When u is high dimensional, for example when (1) arises as the result of discretising a partial differential equation such as the cable equation on a dendrite or axon [1, 2], finding the linearisation about a fixed point, as needed to determine stability, may also be computationally challenging.

In the simplest case, that of finding fixed points of (1) and their dependence on μ, there are two main techniques. The first is integration in time of (1) from a given initial condition until a steady state is reached. Then μ is changed slightly, the process is repeated, and so on. This is conceptually simple, and many accurate integration algorithms exist, but it has several disadvantages:

  • There may be a long transient before a steady state is reached, requiring long time simulations.

  • Only stable fixed points can be found using this method.

  • For fixed μ there may be multiple stable fixed points, and which one is found depends on the value of the initial condition, in a way that may not be obvious.

The second technique for finding fixed points of (1) and their dependence on μ, which is the subject of this review, involves solving the algebraic equation g(u;μ)=0 directly. The stability of a fixed point is then determined by the linearisation of g about it, involving partial derivatives of g, rather than by time integration of (1).

Numerical bifurcation theory involves (among other things) finding fixed points and periodic orbits of models such as (1) by solving a set of algebraic equations, determining the stability of these solutions, and following them as parameters are varied, on a computer. The field is well developed, with a number of books [3–6], book chapters [7], reports [8] and software packages [9–14] available. The point of this article is not to cover numerical bifurcation theory in general, but to demonstrate and review its use in the study of some models from the field of computational neuroscience, in particular those that are high dimensional, resulting from the discretisation of systems of nonlocal differential equations commonly employed as large-scale models of neural tissue, such as the Wilson–Cowan [15] and Amari [16] models. We start in Sect. 2 by explaining the pseudo-arclength continuation algorithm and show how it can be applied to a simple model. Section 3 considers both stationary and moving patterns in one spatial dimension, while Sect. 4 gives an example of a pattern in two spatial dimensions. We discuss a number of extensions in Sect. 5 and conclude in Sect. 6.

2 A Low-Dimensional Model

As a toy model we consider the differential equation

$$\frac{du}{dt} = g(u;\mu) \equiv u^4 - u + \mu^2 - 1; \qquad u \in \mathbb{R},\ \mu \in \mathbb{R},$$
(2)

and assume that we cannot solve it analytically. We see that for μ large and positive, or large and negative, there are no fixed points, as du/dt is always positive. However, for small enough μ there may be fixed points. A full qualitative description of the dynamics of (2) is shown in Fig. 1 and we see that for μ small enough there are two fixed points, one stable and one unstable. We can find the stable fixed point by numerically integrating (2) forwards for a long time. (Note that if we chose the initial condition to be too large, this would not work.) We can also find this fixed point using Newton’s method, starting sufficiently close to it. Newton’s method has the advantage that its convergence does not depend on the stability of the fixed point, and thus it could also be used to find the unstable fixed point, if we started sufficiently close to it. (Clearly, numerical integration on its own is of little use if we want to find unstable objects.) The basic idea of numerical continuation is to take a fixed point, at one value of the parameter, and follow this solution as a parameter is varied, eventually tracing out, for example, the closed curve in Fig. 1. We would also like to know the stability of points on this curve, and that can be calculated as the curve is traced out, or afterwards.

Fig. 1

Dynamics of (2). The curve of stable fixed points is shown solid, while the unstable fixed points are shown dashed. Arrows show the direction of solutions for fixed μ

The method we use is pseudo-arclength continuation; other methods exist [4]. Given a point $(\mu_0, u_0)$ on the curve shown in Fig. 1 (i.e. for which $g(u_0;\mu_0)=0$) we want to find a nearby point $(\mu_1, u_1)$ which satisfies $g(u_1;\mu_1)=0$, i.e. also lies on the curve. We specify this point by saying that not only must it satisfy $g(u_1;\mu_1)=0$, but also that

$$(u_1 - u_0)\dot{u}_0 + (\mu_1 - \mu_0)\dot{\mu}_0 - \Delta s = 0,$$
(3)

where $\Delta s$ is the pseudo-arclength stepsize chosen and $(\dot{u}_0, \dot{\mu}_0)$ is the tangent vector to the curve at $(\mu_0, u_0)$, normalised to have length 1. The overdot indicates differentiation with respect to arclength, s. Geometrically, condition (3) states that $(\mu_1, u_1)$ lies on the line perpendicular to the vector $(\dot{u}_0, \dot{\mu}_0)$, which at its closest point is a distance $\Delta s$ from $(u_0, \mu_0)$. See Fig. 2 for a schematic.

Fig. 2

A schematic showing the relationship expressed by (3). The curve of solutions is solid, and the tangent to this curve at ( μ 0 , u 0 ) is shown by the single-headed arrow. The dashed line is at right angles to this tangent

The pseudo-arclength stepsize $\Delta s$ is chosen by the user, and as with many numerical schemes there is a tradeoff. Using a small stepsize will result in better coverage of the curve and faster convergence of Newton's method (below), but at the expense of more calculations. To obtain $(\dot{u}_0, \dot{\mu}_0)$ we differentiate $g(u;\mu)=0$ with respect to arclength and then evaluate at $(\mu_0, u_0)$:

$$g_u|_{(\mu_0,u_0)}\,\dot{u}_0 + g_\mu|_{(\mu_0,u_0)}\,\dot{\mu}_0 = 0,$$
(4)

where the subscripts indicate partial derivatives. A solution of this is then normalised. To find $(\mu_1, u_1)$ we solve $g(u_1;\mu_1)=0$ and (3) simultaneously using the following Newton iteration:

$$\begin{pmatrix} u_1^{(i)} \\ \mu_1^{(i)} \end{pmatrix} = \begin{pmatrix} u_1^{(i-1)} \\ \mu_1^{(i-1)} \end{pmatrix} - \bigl[J^{(i-1)}\bigr]^{-1} \begin{pmatrix} g\bigl(u_1^{(i-1)}, \mu_1^{(i-1)}\bigr) \\ \bigl(u_1^{(i-1)} - u_0\bigr)\dot{u}_0 + \bigl(\mu_1^{(i-1)} - \mu_0\bigr)\dot{\mu}_0 - \Delta s \end{pmatrix},$$
(5)

for $i=1,2,\ldots,N_N$, where

$$J^{(i)} = \begin{pmatrix} g_u & g_\mu \\ \dot{u}_0 & \dot{\mu}_0 \end{pmatrix},$$
(6)

and the partial derivatives are evaluated at $(\mu_1^{(i)}, u_1^{(i)})$. We take $N_N$ Newton iterations and (assuming that (5) has converged) set $(\mu_1, u_1) = (\mu_1^{(N_N)}, u_1^{(N_N)})$. As an initial condition we can take $(\mu_1^{(0)}, u_1^{(0)}) = (\mu_0 + \dot{\mu}_0\Delta s,\ u_0 + \dot{u}_0\Delta s)$, i.e. the point where the tangent line meets the dashed line in Fig. 2. This point can be regarded as the result of a linear prediction, and Newton's method (5) regarded as a corrector of this prediction. The stability of the fixed point $(\mu_1, u_1)$ depends on the sign of $g_u$ evaluated at this point, and this has already been calculated as the top left entry in the Jacobian J.

To find the next point along the curve, $(\mu_2, u_2)$, we use the Newton iteration

$$\begin{pmatrix} u_2^{(i)} \\ \mu_2^{(i)} \end{pmatrix} = \begin{pmatrix} u_2^{(i-1)} \\ \mu_2^{(i-1)} \end{pmatrix} - \bigl[J^{(i-1)}\bigr]^{-1} \begin{pmatrix} g\bigl(u_2^{(i-1)}, \mu_2^{(i-1)}\bigr) \\ \bigl(u_2^{(i-1)} - u_1\bigr)\dot{u}_1 + \bigl(\mu_2^{(i-1)} - \mu_1\bigr)\dot{\mu}_1 - \Delta s \end{pmatrix},$$
(7)

for $i=1,2,\ldots,N_N$, where the Jacobian is given by (6) and the partial derivatives are evaluated at $(\mu_2^{(i)}, u_2^{(i)})$. The tangent vector $(\dot{u}_1, \dot{\mu}_1)$ can be calculated using the analogue of (4), or by using the approximation

$$\dot{u}_1 \approx \frac{u_1 - u_0}{\Delta s}; \qquad \dot{\mu}_1 \approx \frac{\mu_1 - \mu_0}{\Delta s}.$$
(8)

This process can then be continued to find as many points on the curve as required. Note the following points:

  • Pseudo-arclength continuation follows a curve of solutions in parameter/state space and can follow such a curve through a saddle-node bifurcation, even though such bifurcations can be thought of as involving the annihilation of two solutions. This is its main advantage over natural parameter continuation, for example, which fails at such a point [4, 17].

  • Consider the structure of the equations being solved in (5). The first equation is g(u,μ)=0, and the last is the pseudo-arclength condition. This structure will be repeated in following sections.

  • As presented, this method will find points in one direction along the curve given by g(u,μ)=0. If points in the other direction are required, simply replace the tangent vector by its negative, i.e. $(\dot{u}_0, \dot{\mu}_0) \to -(\dot{u}_0, \dot{\mu}_0)$ when calculating $(\mu_1, u_1)$, and then continue as above.

  • A given problem may have fixed points that lie on a closed curve, as in Fig. 1, or on an unbounded curve.

  • There are a number of refinements that could be made to this algorithm to increase its efficiency. For example, one could terminate the Newton iterations once a solution has been found to within some accuracy, rather than after a fixed number of iterations [4]. One could also adapt the stepsize as the solution curve is traced out to avoid unnecessary iterations of Newton’s method [4, 17]. Another refinement is that if u and μ are of very different magnitudes it may be beneficial to scale one of them. One way to do this is to replace (3) by

    $$\theta^2 (u_1 - u_0)\dot{u}_0 + (\mu_1 - \mu_0)\dot{\mu}_0 - \Delta s = 0,$$
    (9)

    where $0 < \theta \ll 1$ if typical values of u are much larger than those of μ, and $1 \ll \theta$ if the opposite is true.

  • The algorithm above involves two nested loops. The inner loop finds a point on the curve of solutions, and the outer one steps along the curve.

The interested reader is encouraged to reproduce Fig. 1 using the method outlined above, and then to explore further. (See software at http://www.massey.ac.nz/~crlaing/code.htm.)
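To make the algorithm concrete, here is a minimal Python sketch of the predictor-corrector scheme (5)–(8) applied to the toy problem (2); the step size, starting point and iteration counts are illustrative choices rather than values prescribed in the text.

```python
# Pseudo-arclength continuation for the toy model (2):
# g(u; mu) = u^4 - u + mu^2 - 1.
import numpy as np

def g(u, mu):    return u**4 - u + mu**2 - 1.0
def g_u(u, mu):  return 4.0*u**3 - 1.0
def g_mu(u, mu): return 2.0*mu

ds, NN = 0.05, 5                    # pseudo-arclength step, Newton iterations

# Start on the curve: at mu = 0, find a fixed point with plain Newton.
u, mu = -0.7, 0.0
for _ in range(20):
    u -= g(u, mu) / g_u(u, mu)

# Tangent (u_dot, mu_dot): null vector of (g_u  g_mu), normalised; cf. (4).
t = np.array([-g_mu(u, mu), g_u(u, mu)])
t /= np.linalg.norm(t)

branch = [(mu, u)]
for _ in range(300):                # outer loop: step along the curve
    u1, mu1 = u + t[0]*ds, mu + t[1]*ds            # linear predictor
    for _ in range(NN):             # inner loop: Newton corrector (5)-(6)
        F = np.array([g(u1, mu1),
                      (u1 - u)*t[0] + (mu1 - mu)*t[1] - ds])
        J = np.array([[g_u(u1, mu1), g_mu(u1, mu1)],
                      [t[0],         t[1]]])
        u1, mu1 = np.array([u1, mu1]) - np.linalg.solve(J, F)
    t = np.array([u1 - u, mu1 - mu]) / ds          # secant tangent, as in (8)
    t /= np.linalg.norm(t)
    u, mu = u1, mu1
    branch.append((mu, u))          # stability from the sign of g_u(u, mu)
```

Plotting the second component of each entry of branch against the first traces out the closed curve of Fig. 1.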

3 One-Dimensional Models

We now consider several types of pattern that occur in neural field models in one spatial dimension. Such models are used to study macroscopic pattern formation in the cortex, and take the form of nonlocal differential equations. For more background on such models see [16, 18–22], and the recent review [23]. We first consider stationary patterns.

3.1 Stationary Patterns

Consider a typical neural field model on a circular domain:

$$\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{-\pi}^{\pi} w(x-y)\,f\bigl(u(y,t) - h\bigr)\,dy,$$
(10)

where w is an even function, and

$$f(u) = \frac{1}{1 + e^{-\beta u}}$$
(11)

is a sigmoidal function, where β>0 is a steepness parameter. The variable u(x,t) is the neural field at position x and time t and represents the activity of a population of neurons at that point. The function w(xy) represents how neurons at position y affect those at position x, i.e. the network’s connectivity. Its evenness is a manifestation of the isotropy of the domain, i.e. that there is no preferred direction around the domain. The function f is referred to as the firing rate function, converting activity, u, to firing frequency, f(u), and h is a firing threshold.

For concreteness, we will use the Mexican-hat connectivity function

$$w(x) = 10\exp\bigl(-4x^2\bigr) - 6\exp\bigl(-x^2\bigr); \qquad -\pi \le x \le \pi,$$
(12)

as shown in Fig. 3. Stationary solutions of (10) satisfy

$$u(x) = \int_{-\pi}^{\pi} w(x-y)\,f\bigl(u(y) - h\bigr)\,dy,$$
(13)

and it is well known that in the limit β→∞ (i.e. as the firing rate function f tends to the Heaviside function), for a range of values of h, equation (13) supports two “bump” solutions, one of which is stable and the other unstable, under the dynamics of (10) [16, 20]. (Such bump solutions are thought to play a role in short term memory [24].) The two bumps annihilate one another in a saddle-node bifurcation as h is increased. Suppose we wanted to investigate the same phenomenon but for finite β, say β=20.

Fig. 3

The connectivity function $w(x) = 10\exp(-4x^2) - 6\exp(-x^2)$

The first thing to note is that both (10) and (13) are invariant under translations, i.e. having found one solution, u(x), of (13), any translate, u(x+a), a∈ℝ, is also a solution [25]. Since we want only one member of this infinite family of solutions, we need a way to select it. A simple way to do this is to consider only even functions, i.e. functions for which u(−x)=u(x). Many steady states of equations like (10) are found to be even, but not all of them must be [26].

To represent u(x) we expand it as a Fourier series. Keeping in mind that it is an even function, we write

$$u(x,t) = \sum_{i=0}^{\infty} u_i(t)\cos(ix).$$
(14)

The function w(x) is also even, and we write it as a Fourier series

$$w(x) = \sum_{i=0}^{\infty} w_i \cos(ix),$$
(15)

where the $w_i$ can be found in the usual way: multiplying (15) by cos(jx) and integrating over [−π,π]. Using the identity cos(A−B) = cos A cos B + sin A sin B, substituting the two Fourier series above into (10) and equating like terms we find

$$\frac{du_j}{dt} = -u_j + w_j \int_{-\pi}^{\pi} \cos(jy)\,f\Biggl(\sum_{i=0}^{\infty} u_i\cos(iy) - h\Biggr)\,dy$$
(16)

for $j=0,1,2,\ldots$, i.e. an infinite number of ordinary differential equations. The steady states of these satisfy

$$-u_j + w_j \int_{-\pi}^{\pi} \cos(jy)\,f\Biggl(\sum_{i=0}^{\infty} u_i\cos(iy) - h\Biggr)\,dy = 0$$
(17)

for $j=0,1,2,\ldots$. In practice we can only solve for a finite number of the $u_j$, say N, so we solve

$$-u_j + w_j \int_{-\pi}^{\pi} \cos(jy)\,f\Biggl(\sum_{i=0}^{N-1} u_i\cos(iy) - h\Biggr)\,dy = 0$$
(18)

for $j=0,1,2,\ldots,N-1$. Note that if w is given exactly by a finite Fourier series, i.e. $w_i = 0$ for $i > N_F$, then at steady state $u_i = 0$ for $i > N_F$, and the truncation of (18) at $N-1 = N_F$ will not introduce any errors, as noted by a number of authors [27–29].
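As an illustration, the left-hand side of (18) might be evaluated as in the following Python sketch, which uses the trapezoidal rule on a uniform periodic grid; the number of quadrature nodes M is an illustrative choice.

```python
# Residual of the Fourier-truncated bump problem (18).
import numpy as np

N, M, beta = 15, 201, 20.0
y  = np.linspace(-np.pi, np.pi, M, endpoint=False)  # periodic quadrature nodes
dy = 2.0*np.pi / M
C  = np.cos(np.outer(np.arange(N), y))              # C[i, k] = cos(i * y_k)

def f(u):                                           # firing rate function (11)
    return 1.0 / (1.0 + np.exp(-beta*u))

# Fourier cosine coefficients of (12): w_0 = (1/2pi) int w,
# w_i = (1/pi) int w cos(iy) dy for i >= 1.
w  = 10.0*np.exp(-4.0*y**2) - 6.0*np.exp(-y**2)
wi = (C @ w) * dy / np.pi
wi[0] /= 2.0

def F(v, h):
    """v holds the cosine coefficients u_0, ..., u_{N-1}; returns (18)."""
    u_of_y = v @ C                                  # sum_i u_i cos(i y) on the grid
    integrals = (C @ f(u_of_y - h)) * dy            # int cos(j y) f(u(y) - h) dy
    return -v + wi * integrals
```

Appending the pseudo-arclength condition (20), below, and applying the Newton iteration (22) then proceeds exactly as in the scalar example of Sect. 2.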

We now show how to find and follow solutions of (18) as h is varied. Let v be the N-dimensional column vector with components $u_0, u_1, \ldots, u_{N-1}$. The N scalar equations (18) can then be written in vector form as

F(v,h)=0,
(19)

where each component of this equation corresponds to one equation in (18), and $F: \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}^N$. For fixed h, say $h_0$, within some interval, we expect (19) to have several isolated solutions. One or more of these may be stable, and can be found by integrating (10) forward in time. Alternatively, a solution of (19) may be found using Newton's method. Suppose one of these solutions is $v_0$, and we want to find another nearby solution, $(v_1, h_1)$, which satisfies (19). The pseudo-arclength condition analogous to (3) is

$$(v_1 - v_0)^T \dot{v}_0 + (h_1 - h_0)\dot{h}_0 - \Delta s = 0,$$
(20)

where a superscript “T” indicates transpose. This just states that $(v_1, h_1)$ lies on the hyperplane perpendicular to the vector $(\dot{v}_0, \dot{h}_0)$, which at its closest point is a distance $\Delta s$ from $(v_0, h_0)$. Analogously to (4), we see that the (N+1)-dimensional column vector

$$\begin{pmatrix} \dot{v}_0 \\ \dot{h}_0 \end{pmatrix}$$
(21)

is the null vector of the N×(N+1) matrix $(F_v \mid F_h)$, where subscripts indicate partial derivatives (i.e. $F_v$ is the N×N Jacobian of F with respect to v and $F_h$ is a column vector of derivatives with respect to h) and these derivatives are evaluated at $(v_0, h_0)$. Thus once the vector (21) has been found and normalised, it can be used in (20).
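One way to compute this null vector in practice is via the singular value decomposition, as in the following sketch; the Jacobian blocks $F_v$ and $F_h$ are assumed to be available, from finite differences of F for example.

```python
# Tangent vector (21): null vector of the N x (N+1) matrix (F_v | F_h).
import numpy as np

def tangent(Fv, Fh):
    A = np.hstack([Fv, Fh.reshape(-1, 1)])   # N x (N+1)
    _, _, Vt = np.linalg.svd(A)
    t = Vt[-1]                    # right singular vector of the smallest sigma
    return t / np.linalg.norm(t)  # (v_dot, h_dot), unit length
```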

We solve F( v 1 , h 1 )=0 and (20) simultaneously using the analogue of (5), namely

$$\begin{pmatrix} v_1^{(i)} \\ h_1^{(i)} \end{pmatrix} = \begin{pmatrix} v_1^{(i-1)} \\ h_1^{(i-1)} \end{pmatrix} - \bigl[J^{(i-1)}\bigr]^{-1} \begin{pmatrix} F\bigl(v_1^{(i-1)}, h_1^{(i-1)}\bigr) \\ \bigl(v_1^{(i-1)} - v_0\bigr)^T \dot{v}_0 + \bigl(h_1^{(i-1)} - h_0\bigr)\dot{h}_0 - \Delta s \end{pmatrix},$$
(22)

for $i=1,2,\ldots,N_N$, where

$$J^{(i)} = \begin{pmatrix} F_v & F_h \\ \dot{v}_0^T & \dot{h}_0 \end{pmatrix}$$
(23)

is the (N+1)×(N+1) Jacobian of the augmented system, and the partial derivatives are evaluated at $(v_1^{(i)}, h_1^{(i)})$. As above, we take $N_N$ Newton iterations and (assuming that (22) has converged) set $v_1 = v_1^{(N_N)}$ and $h_1 = h_1^{(N_N)}$. As an initial condition we can take $v_1^{(0)} = v_0 + \dot{v}_0\Delta s$ and $h_1^{(0)} = h_0 + \dot{h}_0\Delta s$. The stability of the fixed point $(v_1, h_1)$ depends on the eigenvalues of $F_v$ evaluated at this point, and this matrix has already been calculated as the top left N×N block in the Jacobian J. We find the next solution $(v_2, h_2)$ in exactly the same way as in Sect. 2, and can also use the approximation (8) if desired.

The results of following solutions of (19) as h is varied are shown in Fig. 4. In the top panel we see the stable and unstable solutions annihilating one another as h is increased, as expected. Representative solutions are shown in panel (b), and the most positive eigenvalue associated with the stability of solutions is shown in panel (c). Note that the zero eigenvalue associated with the translational invariance of solutions of (10) does not appear, as it has been removed by our choice of only considering even functions. Calculations were done with N=15, and the integrals in (18) were performed using the trapezoidal rule, which is extremely accurate on a periodic domain [30].

Fig. 4

(a) Solutions of (19). The norm of v is plotted vertically; solid curve indicates stable solutions, dashed unstable. (b) Even solutions of (13) at the two points indicated in panel (a). (c) Most positive eigenvalue associated with solutions of (19). Parameters: N=15, β=20

Several points should be made to end this section:

  • An alternative to discretising (10) using Fourier series is to discretise the spatial domain [π,π] directly, using a uniform grid. The integral in (10) can then be evaluated using the trapezoidal rule. Alternatively, since the integral is a convolution, it can be evaluated efficiently using the fast Fourier transform and multiplication in frequency space. Using this type of discretisation it is still straightforward to restrict to even functions. Essentially, one works with the function defined on only half of the domain ([π,0]) and imposes evenness when necessary.

  • As presented we can only find even solutions, and only determine stability with respect to perturbations that are also even. We will thus not detect any bifurcations leading to solutions which are not even.

  • If we were interested in solutions that were not even, we could include sine terms in (14), substitute into (10), and derive the differential equations governing the evolution of their coefficients. We would then have to find some way of removing the translational invariance of solutions.

  • The relationship between the symmetry of the system (i.e. its invariance with respect to group actions) and methods for choosing one from a continuous family of related solutions is discussed in more detail in [25, 31].

  • Several other references dealing with continuation of high-dimensional problems in different contexts are [32–35].

We now consider moving patterns in one spatial dimension.

3.2 Moving Patterns

Let us again consider the model

$$\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int_{-\infty}^{\infty} w(x-y)\,f\bigl(u(y,t) - h\bigr)\,dy,$$
(24)

with f given by (11) but now with coupling function

$$w(x) = \tfrac{1}{2}e^{-|x|}.$$
(25)

Since $\int_{-\infty}^{\infty} w(x)\,dx = 1$ we see that spatially uniform steady states of (24) satisfy

$$u = f(u - h).$$
(26)

Plotting u and f(u−h) on the same graph for several values of h, as in Fig. 5, we see that for moderate values of h, three such steady states exist, but if |h| is too large, only one exists. Let us choose a value of h such that three steady states exist, $u_1 < u_2 < u_3$, and initialise part of the domain at $u = u_1$ and the rest at $u = u_3$. Doing so we see the behaviour in Fig. 6, i.e. a front develops which connects these two steady states and moves with a constant speed in the direction of increasing x. We would like to find the shape of this front, and how its properties (speed, stability) depend on a parameter, say h.
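The three roots of the scalar equation (26) can be found by bracketing sign changes, as in this short sketch; the search interval is an illustrative choice.

```python
# Spatially uniform steady states of (24): roots of u = f(u - h), as in Fig. 5.
import numpy as np
from scipy.optimize import brentq

beta, h = 20.0, 0.3
r = lambda u: u - 1.0/(1.0 + np.exp(-beta*(u - h)))

grid = np.linspace(-0.5, 1.5, 400)
vals = r(grid)
roots = [brentq(r, a, b) for a, b, fa, fb in
         zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa*fb < 0]
print(roots)   # three roots u1 < u2 < u3 for moderate h
```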

Fig. 5

Solutions of (26) occur when the function f(u−h) intersects the diagonal, where f is given by (11) and β=20

Fig. 6

Top: space–time plot of a solution of (24)–(25), where the domain has been truncated to [0,50]. Colour indicates the value of u. Bottom: front at t=15. Parameters: β=20, h=0.3

To use the ideas above, we need to formulate an algebraic equation (or equations) satisfied by this front. We do this by moving to a travelling coordinate frame with the same speed as the front. Letting ξ=xct, equation (24) becomes

$$\frac{\partial u(\xi,t)}{\partial t} = c\frac{\partial u(\xi,t)}{\partial \xi} - u(\xi,t) + \int_{-\infty}^{\infty} w(\xi-y)\,f\bigl(u(y,t) - h\bigr)\,dy.$$
(27)

If c is the speed of the front, then the front is stationary in the (ξ,t) coordinate system, i.e. it satisfies

$$0 = c\frac{du}{d\xi} - u + \int_{-\infty}^{\infty} w(\xi-y)\,f\bigl(u(y) - h\bigr)\,dy.$$
(28)

A natural way to solve (28) is to truncate the domain and then discretise it using, say, N equally spaced grid points. The integral can be evaluated as in Sect. 3.1 and the spatial derivative using finite differences, or the fast Fourier transform [30]. However, imposing (28) at each grid point would give N equations, but there are N+1 unknowns, namely the value of u at each grid point, and the speed c. This is just a reflection of the fact that (24) is translationally invariant and thus there is a continuous family of front solutions satisfying (28), each differing from one another by a simple translation. As in Sect. 3.1 we need a way to choose just one of these. One simple way to do this is to impose the pinning condition

$$\int_{-\infty}^{\infty} \bigl(u - \hat{u}\bigr)\frac{\partial\hat{u}}{\partial\xi}\,d\xi = 0,$$
(29)

where $\hat{u}$ is a template function. Equation (29) is the result of minimising the $L^2$ norm of the difference between u and $\hat{u}$ [36, 37], so $\hat{u}$ should be somewhat similar to the solution one is trying to find. We can thus solve (28) and (29) simultaneously for u at each grid point and the speed c. Letting v be the (N+1)-dimensional column vector whose first N components are u at each of the N grid points, and whose (N+1)th component is c, equations (28) and (29) can be written as

F(v,h)=0,
(30)

where $F: \mathbb{R}^{N+1} \times \mathbb{R} \to \mathbb{R}^{N+1}$. Once a pseudo-arclength condition like (20) has been appended, solutions of this set of N+2 equations can be followed just as in Sect. 3.1. To find the stability of a front found in this way at a particular value of c, we need to find all eigenvalues of the linearisation of (28) about the front. This linearisation appears as the top left N×N block in the Jacobian of the augmented system. Note that this linearisation has a zero eigenvalue with eigenvector equal to the (discretised) spatial derivative, ∂u/∂ξ. Stability of the front is determined by eigenvalues other than this one. (The stability of a wave in the original, i.e. non-discretised, system involves determining a continuous spectrum [38], and the discrete set of eigenvalues we find is an approximation to that.)
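A sketch of how this residual might be assembled is given below; the FFT-based convolution treats the truncated domain as periodic, and the grid size and template match those quoted in the next paragraph.

```python
# Residual for the travelling front: (28) at each grid point, plus the
# pinning condition (29).
import numpy as np

N, L, beta, h = 1000, 50.0, 20.0, 0.3
dx = L / N
xi = np.linspace(0.0, L, N, endpoint=False)
wk = np.fft.fft(0.5*np.exp(-np.minimum(xi, L - xi)))  # wrapped kernel (25)
uhat  = 0.5*(1.0 + np.tanh(25.0 - xi))                # template function
duhat = np.gradient(uhat, dx)

def f(u): return 1.0 / (1.0 + np.exp(-beta*u))

def residual(v):
    """v[:N] is u on the grid, v[N] is the speed c; returns N+1 equations."""
    u, c = v[:N], v[N]
    conv  = np.real(np.fft.ifft(wk * np.fft.fft(f(u - h)))) * dx
    dudxi = np.gradient(u, dx)                # one-sided at the domain ends
    eqs = c*dudxi - u + conv                  # (28) at each grid point
    pin = np.sum((u - uhat)*duhat) * dx       # pinning condition (29)
    return np.append(eqs, pin)
```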

The results of following the front as h is varied are shown in Fig. 7. We used 1,000 evenly spaced points over a domain of [0,50], and the template function was $\hat{u}(\xi) = 0.5(1 + \tanh(25 - \xi))$. We see that for a range of h values a stable front exists, and its speed may be positive, negative or zero, depending on the value of h. The stable front is destroyed in saddle-node bifurcations as h is varied too far from h=0.5, at the same values at which the spatially uniform steady states undergo saddle-node bifurcations, i.e. the stable front only exists when there are two stable spatially uniform steady states, which it connects. (The continuation was stopped after a number of points on each of the unstable branches were found.) Although we have only considered a front solution, the techniques described in this section can be used to follow other moving patterns such as “bumps” or pulses [27, 28, 37, 39, 40]. We now consider solutions in two spatial dimensions.

Fig. 7

Speed of a front, c, as a function of h. Solid: stable, dashed: unstable. Parameters: β=20

4 Two-Dimensional Models

We consider the model

$$\frac{\partial u(x,y,t)}{\partial t} = A\int_0^L\!\!\int_0^L w\bigl(x-x', y-y'\bigr)\,f\bigl(u(x',y',t) - h\bigr)\,dx'\,dy' - u(x,y,t) - a(x,y,t),$$
(31)
$$\tau\frac{\partial a(x,y,t)}{\partial t} = Bu(x,y,t) - a(x,y,t),$$
(32)

defined on the square domain $[0,L]^2$ with periodic boundary conditions in both x and y. As well as the activity variable u we also have a “recovery” (or “adaptation”) variable a, which is driven by u with strength B on a timescale τ. The firing rate function is given by (11) and we use

$$w(x,y) = \exp\bigl(-\bigl(x^2+y^2\bigr)\bigr) - 0.17\exp\bigl(-0.2\bigl(x^2+y^2\bigr)\bigr),$$
(33)

i.e. Mexican-hat connectivity, as shown in Fig. 8. When a=B=0, equations (31)–(32) are capable of supporting spatially localised “bump” solutions, and when the influence of a is strong enough (i.e. B is increased from zero) these bumps travel at a constant speed. An example is shown in Fig. 9. The bump is moving to the left (i.e. in the negative x direction) at a constant speed. Both u and a fields have a single peak, and the maximum of a is slightly behind that of u. We would like to find the dependence of this bump on various parameters in the model.

Fig. 8

The Mexican-hat coupling function (33)

Fig. 9

Leftward-moving bump solutions of (31)–(32). Left column: u, right column: a. Parameters: L=15, A=2, β=5, h=0.8, B=0.4, τ=3

The first step is to move to a travelling coordinate frame in which the bump is stationary, as in Sect. 3.2. In principle the bump can move in any direction, but we specify that it is moving in the negative x direction. Letting ξ=x+ct, (31)–(32) become

$$\frac{\partial u(\xi,y,t)}{\partial t} = A\int_0^L\!\!\int_0^L w\bigl(\xi-x', y-y'\bigr)\,f\bigl(u(x',y',t) - h\bigr)\,dx'\,dy' - u(\xi,y,t) - a(\xi,y,t) - c\frac{\partial u(\xi,y,t)}{\partial \xi},$$
(34)
$$\tau\frac{\partial a(\xi,y,t)}{\partial t} = Bu(\xi,y,t) - a(\xi,y,t) - c\tau\frac{\partial a(\xi,y,t)}{\partial \xi},$$
(35)

and if c (>0) is the speed of the bump, it will be a stationary solution of (34)–(35), i.e. satisfy

$$0 = A\int_0^L\!\!\int_0^L w\bigl(\xi-x', y-y'\bigr)\,f\bigl(u(x',y') - h\bigr)\,dx'\,dy' - u(\xi,y) - a(\xi,y) - c\frac{\partial u(\xi,y)}{\partial \xi},$$
(36)
$$0 = Bu(\xi,y) - a(\xi,y) - c\tau\frac{\partial a(\xi,y)}{\partial \xi}.$$
(37)

Now any solution of (36)–(37) can be translated by an arbitrary amount in either the ξ or y direction, so we need to remove these degeneracies. We first impose the requirement that the solution be symmetric about y=L/2, i.e. $u(\xi, y-L/2) = u(\xi, L/2-y)$ and similarly for a. We then impose a scalar condition which removes the invariance with respect to translations in the ξ direction. There are many ways to do this, but a simple and robust one is to specify that the value of u in the centre of the domain is equal to its average over ξ at y=L/2, i.e.

$$u(L/2, L/2) - \frac{1}{L}\int_0^L u(\xi, L/2)\,d\xi = 0.$$
(38)

(A condition like (29), imposed at y=L/2, could also be used. Note that since solutions of (36)–(37) are invariant under translations in two directions, we need to impose two conditions to remove these degeneracies.) We discretise the square domain with N equally spaced points in both directions. Imposing that (36) and (37) are satisfied at each of these $N^2$ points gives $2N^2$ equations, and combined with (38) we have $2N^2+1$ equations and the same number of unknowns. These equations can be written in the form

F(v,λ)=0,
(39)

where $F: \mathbb{R}^{2N^2+1} \times \mathbb{R} \to \mathbb{R}^{2N^2+1}$ and λ is one of the parameters of the system. Appending a pseudo-arclength condition to (39) one can follow solutions as λ is varied. (Note that (39) is typically a very large system of equations; see below.) Typical results are shown in Fig. 10, where we vary A. A stable bump is destroyed in saddle-node bifurcations as A is made either too large or too small. (The continuation was stopped after a number of points on each of the unstable branches were found.)

Fig. 10

Speed of a two-dimensional bump, c, as a function of A. Solid: stable, dashed: unstable. Parameters: L=15, β=5, h=0.8, B=0.4, τ=3, N=256

Note the following points:

  • The convolution in (36) is evaluated using the two-dimensional Fast Fourier Transform (FFT), i.e. the FFT of (the discretisations of) both w and f(uh) are taken, they are multiplied together, and then the inverse FFT is taken. Partial derivatives are evaluated using finite difference approximations, but could also be implemented using the FFT [30].

  • Let us append a pseudo-arclength condition to (39) and define a vector $V \in \mathbb{R}^{2N^2+2}$ by concatenating v and λ. The resulting set of equations can be written G(V)=0. Given a solution of (39), finding the next point along this curve amounts to solving G(V)=0 using Newton's method, i.e. iterating

    $$V_{n+1} = V_n - J^{-1}G(V_n); \qquad n = 0, 1, 2, \ldots,$$
    (40)

    where J is the Jacobian of G, evaluated at V n . For a reasonable discretisation, a large value of N is needed and hence J will be too big to store, let alone invert, so instead we write (40) as

    $$J\Delta_n = G(V_n),$$
    (41)

    where $\Delta_n = V_n - V_{n+1}$. This is a linear equation for the unknown $\Delta_n$, but instead of solving it directly one can solve it iteratively, using for example the GMRES algorithm [41, 42]. Some implementations of the GMRES algorithm, e.g. that in Matlab, do not require the Jacobian J to be explicitly formed, only that one can evaluate the product of J with an arbitrary vector, ϕ. This can be done for a general problem in a matrix-free way with one extra evaluation of G, using the finite difference approximation

    $$J\phi \approx \frac{G(V_n + \epsilon\phi) - G(V_n)}{\epsilon}, \qquad 0 < \epsilon \ll 1.$$
    (42)

    Similarly, we need the eigenvalues of J, or at least a few with the largest real part, to determine stability. The Matlab function eigs does not need J, only its product with an arbitrary vector, which can be implemented as above.

    Note that for the particular problem considered here, the product of J with an arbitrary vector can be calculated exactly without the need for the approximation (42), as explained by Rankin et al. [43]. These authors used GMRES to follow stationary solutions of neural field equations in two dimensions, but the results here may be the first for travelling solutions. (A sketch of a matrix-free Newton-GMRES step of the general form (40)–(42) is given after this list.)

  • As well as travelling bumps [44], patterns that appear in two spatial dimensions include stationary groups of bumps [19, 37, 43], “breathing” bumps [45], rings and rotating groups of bumps [46], waves [47], spirals [48, 49] and target patterns [45, 50]. While stationary patterns and those that propagate at a constant velocity (either translational or rotational) can be dealt with using the ideas in this section, patterns such as breathing bumps and target waves are intrinsically periodic in time, and thus must be dealt with using slightly different techniques.

  • We have evaluated the double integral in (31) directly using fast Fourier transforms, but some early progress on two-dimensional neural fields was made using other Fourier techniques [19, 48], and see [51]. For example, suppose that the Fourier transform of w was

    $$\frac{1}{s^4 + s^2 + 1},$$
    (43)

    where $s^2 = k_x^2 + k_y^2$ and $k_x$ and $k_y$ are the two transform variables. Taking the two-dimensional Fourier transform of (31) we obtain

    $$\frac{\partial\hat{u}}{\partial t} + \hat{u} + \hat{a} = \frac{A\hat{f}}{s^4 + s^2 + 1},$$
    (44)

    where the hat indicates the Fourier transform. Multiplying (44) by s 4 + s 2 +1, and taking the inverse Fourier transform, using a Fourier transform identity we obtain

    $$\bigl(\nabla^4 - \nabla^2 + 1\bigr)\biggl(\frac{\partial u(x,y,t)}{\partial t} + u(x,y,t) + a(x,y,t)\biggr) = Af\bigl(u(x,y,t) - h\bigr),$$
    (45)

    which is formally equivalent to (31) but only involves derivatives. The advantage of this formulation is that solutions of (36)–(37) satisfy

    $$\bigl(\nabla^4 - \nabla^2 + 1\bigr)\biggl(c\frac{\partial u}{\partial \xi} + u + a\biggr) = Af(u - h),$$
    (46)
    $$c\tau\frac{\partial a}{\partial \xi} = Bu - a,$$
    (47)

    which only involve derivatives. Finite difference approximations to these derivatives can then be implemented using sparse matrices, thus removing the need to store and manipulate large full matrices. This idea has subsequently been used by several other groups [47, 50].

    Using this method, the coupling function is assumed to depend only on distance in two dimensions, and is given by

    $$w(r) = \int_0^{\infty} s\,J_0(rs)\,\hat{w}(s)\,ds,$$
    (48)

    where $J_0$ is the Bessel function of the first kind of order zero and $\hat{w}(s)$ is the Fourier transform of w (in the case above, $\hat{w}(s) = 1/(s^4+s^2+1)$).
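As promised above, the following Python sketch shows one possible matrix-free Newton-GMRES step built from (40)–(42) using SciPy; the residual G, the current iterate and the perturbation size ϵ are placeholders for the problem at hand.

```python
# One matrix-free Newton step: solve (41) for Delta_n by GMRES, using the
# finite-difference Jacobian-vector product (42).
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres_step(G, Vn, eps=1e-7):
    GV = G(Vn)
    Jv = lambda phi: (G(Vn + eps*phi) - GV) / eps   # the approximation (42)
    J  = LinearOperator((Vn.size, Vn.size), matvec=Jv, dtype=float)
    delta, info = gmres(J, GV)                      # J Delta_n = G(V_n)
    assert info == 0, "GMRES did not converge"
    # The same LinearOperator can be passed to scipy.sparse.linalg.eigs to
    # obtain a few eigenvalues of largest real part, for stability.
    return Vn - delta                               # V_{n+1} = V_n - Delta_n
```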

We now discuss a number of extensions to the ideas presented here.

5 Extensions

5.1 Delays

We have considered differential equations where the derivatives depend on only the values of the variables at the present time. However, delays are ubiquitous in neural systems [52–56] (and elsewhere), so the study of delay differential equations naturally arises. Such systems can be numerically integrated using, for example, Matlab's dde23, but following periodic orbits and determining the stability of fixed points is much more involved than for non-delayed systems, due to the infinite-dimensional nature of the problem, even for a scalar equation. The software package DDE-BIFTOOL [11] is useful for performing such calculations; also see [57].

5.2 Global Bifurcations

Following solutions using pseudo-arclength continuation and determining their stability via linearisation about them will only detect local bifurcations such as saddle-node and Hopf. However, global bifurcations can also play a role in determining the dynamics of a system [58–60]. They also provide another way of viewing spatially localised “bumps,” or fronts. As an example, consider (28), writing x instead of ξ. Using the fact that the coupling function (25) is the Green function of $(1 - d^2/dx^2)$, acting on (28) with this operator gives

$$0 = \biggl(1 - \frac{d^2}{dx^2}\biggr)\biggl(c\frac{d}{dx} - 1\biggr)u + f\bigl(u(x) - h\bigr),$$
(49)

i.e.

$$cu''' - u'' - cu' + u = f(u - h),$$
(50)

where the primes indicate derivatives with respect to space. (Another way to derive (49) is to take Fourier transforms of (28), use the fact that the transform of w is $1/(1+k^2)$, where k is the transform variable, then rearrange and take the inverse Fourier transform [22].) This ordinary differential equation has fixed points which satisfy u=f(u−h), and the front in Fig. 6 is a heteroclinic connection between two of these fixed points. Now (50) can be written

$$u' = v,$$
(51)
$$v' = z,$$
(52)
$$z' = \frac{f(u-h) - u}{c} + v + \frac{z}{c},$$
(53)

with Jacobian at the fixed point $(u,v,z) = (u^*, 0, 0)$ given by

$$J = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \dfrac{f'(u^*-h) - 1}{c} & 1 & \dfrac{1}{c} \end{pmatrix}.$$
(54)

The eigenvalues of this, λ, satisfy

$$\lambda^3 - \frac{\lambda^2}{c} - \lambda + \frac{1 - f'(u^*-h)}{c} = 0.$$
(55)

For h ≈ 0.5, $f'(u^*-h) \approx 0$, where $u^*$ is either the lower fixed point, $u_1$, or the upper one, $u_3$ (see Fig. 5 and Sect. 3.2), and thus (55) can be written approximately as

$$(\lambda - 1)(\lambda + 1)(\lambda - 1/c) = 0,$$
(56)

i.e. both fixed points have a two-dimensional unstable manifold and one-dimensional stable manifold (for c>0). A heteroclinic connection between the fixed points occurs when the unstable manifold of one intersects the stable manifold of the other, which is a codimension-one event for this system, i.e. it will generically occur at isolated values of the parameter c. These values are those shown in Fig. 7. They can be found by “shooting”: numerically integrating (51)–(53) backwards using an initial condition on the stable manifold of one fixed point, and varying c until this trajectory intersects the unstable manifold of the other fixed point. If c is negative the dimensions of the stable and unstable manifolds are interchanged, but the argument above still applies.
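A sketch of this shooting calculation is given below; the quoted fixed-point values, the displacement along the stable eigenvector, the escape criterion and the bracket for c are all illustrative choices.

```python
# Shooting for the heteroclinic connection of (51)-(53): integrate backwards
# in time from the one-dimensional stable manifold of the upper fixed point
# and adjust c until the trajectory lands near the lower fixed point.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

beta, h = 20.0, 0.3
f  = lambda u: 1.0/(1.0 + np.exp(-beta*u))
fp = lambda u: beta*f(u)*(1.0 - f(u))
u1, u3 = 0.0025, 0.99999            # approximate outer roots of u = f(u - h)

def miss(c, delta=1e-6, T=100.0):
    J = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                  [(fp(u3 - h) - 1.0)/c, 1.0, 1.0/c]])   # Jacobian (54) at u3
    lam, vec = np.linalg.eig(J)
    vs = np.real(vec[:, np.argmin(np.real(lam))])        # stable eigenvector
    y0 = np.array([u3, 0.0, 0.0]) + delta*vs
    back = lambda t, y: [-y[1], -y[2],                   # time-reversed (51)-(53)
                         -((f(y[0] - h) - y[0])/c + y[1] + y[2]/c)]
    escape = lambda t, y: abs(y[0] - 0.5) - 1.0          # stop when u blows up
    escape.terminal = True
    sol = solve_ivp(back, (0.0, T), y0, events=escape, rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - u1         # changes sign at the connection

c_star = brentq(miss, 0.1, 1.0)      # bracket found by inspecting miss(c)
```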

Notes:

  • In the same way that a front can be viewed as a heteroclinic connection between two fixed points, a spatially localised pulse can be viewed as a homoclinic orbit to a fixed point. This applies whether the pulse is stationary [18, 61, 62] or moving [39] (in which case the speed appears as a parameter, as above).

  • Software for the continuation of homoclinic and heteroclinic orbits exists [10, 63].

  • The conversion of an integral equation like (28) to a differential equation via Fourier transform has been used by a number of authors [18, 61, 62, 64]. For this technique to work, the Fourier transform of the coupling function should be a rational function of the square of the transform variable.

  • The resulting differential equations sometimes have additional structure (they are Hamiltonian, for example) and this can be exploited in their analysis [18, 61].

  • Solutions which are periodic in space may also be of interest, and it may be easier to find them by considering periodic solutions of a differential equation of the form (50) rather than the equivalent integral equation (28).

5.3 Following Bifurcations

So far we have followed solutions of algebraic equations as a single parameter has varied. Local bifurcations occur at isolated values of this parameter. However, it is often more informative to follow these bifurcations as a second parameter is varied. To demonstrate this we return to the problem in Sect. 3.1, i.e. we look for stationary solutions of (10) where f is given by (11) but the coupling function is

$$w(x) = 10\exp\bigl(-4x^2\bigr) - B\exp\bigl(-x^2\bigr).$$
(57)

The analysis in Sect. 3.1 corresponds to the case B=6, but we would like to know the effect of varying B on the range of values of h for which stable solutions exist (see Fig. 4). We proceed as in Sect. 3.1 but the coefficients w i are now functions of B, i.e. we solve

$$-u_j + w_j(B)\int_{-\pi}^{\pi} \cos(jy)\,f\Biggl(\sum_{i=0}^{N-1} u_i\cos(iy) - h\Biggr)\,dy = 0$$
(58)

for $j=0,1,2,\ldots,N-1$. This set of equations can be written compactly as

F(v,h,B)=0,
(59)

as before, where $F: \mathbb{R}^N \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}^N$. For a given h and B, a solution of (59) is a stationary solution of (10). However, for a fixed B we would like to find the value of h at which the saddle-node bifurcation occurs, as in Fig. 4. At this point the Jacobian of F has a zero eigenvalue. The eigenvector $\phi \in \mathbb{R}^N$ corresponding to this eigenvalue then satisfies

Jϕ=0,
(60)

where J is the N×N Jacobian. Now ϕ is unique only up to scaling, so to choose a particular ϕ we impose

$$\phi^T\phi - 1 = 0,$$
(61)

i.e. that ϕ has unit length. Thus for a fixed B, the value of h at which the saddle-node bifurcation occurs (and the solution at that point, v) satisfy

F(v,h,B)=0,
(62)
Jϕ=0,
(63)
$$\phi^T\phi - 1 = 0.$$
(64)

This is a set of 2N+1 equations in 2N+1 unknowns (the N components of v, the N components of ϕ, and h). Note that J depends on v, h and B, and that the product of J and ϕ can be found easily using the method discussed in Sect. 4. Concatenating v, ϕ and h to form the (2N+1)-dimensional vector W, equations (62)–(64) can be written

H(W,B)=0,
(65)

where $H: \mathbb{R}^{2N+1} \times \mathbb{R} \to \mathbb{R}^{2N+1}$. Solutions of (65) can be followed as B is varied in the usual way, and by plotting the values of B against the corresponding values of h (one component of W), the curve of saddle-node bifurcations can be traced out in the (h,B) plane. The results of doing this are shown in Fig. 11. We see that as B is decreased, the value of h at which the saddle-node bifurcation occurs increases, and vice versa.
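A sketch of how the extended system (62)–(64) might be assembled, with the product Jϕ formed by finite differences as in Sect. 4, is given below; F is assumed to be a routine evaluating (58)–(59).

```python
# Extended system (62)-(64) whose solutions are saddle-node bifurcations.
import numpy as np

def H(W, B, F, N, eps=1e-7):
    """W = (v, phi, h), with v and phi in R^N; returns 2N+1 residuals."""
    v, phi, hh = W[:N], W[N:2*N], W[2*N]
    Fv   = F(v, hh, B)                          # (62): F(v, h, B) = 0
    Jphi = (F(v + eps*phi, hh, B) - Fv) / eps   # (63): J phi = 0
    return np.concatenate([Fv, Jphi, [phi @ phi - 1.0]])   # (64)
```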

Fig. 11

Curve of saddle-node bifurcations of stationary single-bump solutions of (10), where the coupling function is (57). A stable and unstable pair exist to the left of this curve. Figure 4(a) is a horizontal “slice” through this figure at B=6. Parameters: N=15, β=20

Although we have not shown any here, Hopf bifurcations can also occur in neural field models, leading to oscillatory behaviour [20, 22, 65, 66]. They are characterised by a pair of complex conjugate eigenvalues of the Jacobian passing through the imaginary axis. Curves of Hopf bifurcations can be followed as two parameters are varied in a similar way to that explained above for saddle-node bifurcations, the main difference being that in the simplest formulation there are O(3N) equations to solve rather than O(2N), as both the real and the imaginary parts of the corresponding equations have to be solved [8, 67]. Note that more sophisticated algorithms can reduce the number of equations to be solved when following both saddle-node and Hopf bifurcations [4].

5.4 Maps

Pseudo-arclength continuation is a method for following solutions of algebraic equations as a parameter is varied. In this paper we have used algebraic equations which define stationary solutions of differential equations, in either a stationary or uniformly travelling coordinate frame. However, for some dynamical systems we may be interested in fixed points of a discrete-time map, which may correspond to, for example, periodic orbits of an underlying system [68–70] (and see below). The equations defining these fixed points are also algebraic and can thus be treated using the same methods as discussed above. Note that the criterion for stability of a fixed point of a differential equation is different from that of a fixed point of a map [58–60].

5.5 Periodic Orbits

We have only considered stationary solutions of differential equations, but many differential equations have solutions which are periodic in time, arising from, for example, a Hopf bifurcation [58, 59]. Periodic forcing, in either time or space [71, 72], can also generate solutions which are periodic in time or space, respectively. Finding and following periodic orbits can be done using pseudo-arclength continuation. The main idea is as before: construct a set of algebraic equations which are satisfied by the periodic orbit. Such an orbit is represented in a finite-dimensional way using, for example, a finite Fourier series expansion, or more commonly and efficiently, a piecewise polynomial function [59, 73] for each variable. This representation of the orbit is then substituted into the governing differential equations, giving a set of algebraic equations that must be satisfied. For autonomous differential equations (i.e. ones which do not explicitly depend on time) there is an invariance with respect to time shifts, in the same way that we saw invariance with respect to spatial shifts in Sect. 3. This invariance can be removed in the same way, by using a scalar “phase” condition to select one from a continuous family [73].

An alternative method for finding periodic orbits is to put a Poincaré section in the phase space in such a way as to intersect the periodic orbit, which then becomes a fixed point of a map defined from the section to itself. As an example, consider the FitzHugh–Nagumo equations [74]

$$\frac{dv}{dt} = \frac{v(v+0.1)(1-v) - w + 1}{0.1},$$
(66)
$$\frac{dw}{dt} = v - 0.5w,$$
(67)

which have a stable periodic orbit. Figure 12 shows the stable periodic orbit and a transient. We define the Poincaré section

$$\Sigma = \bigl\{(v,w) : v > 0.5,\ w = 1\bigr\},$$
(68)

and choose an initial condition on it, $(v,w) = (V_0, 1)$. ($V_0 = 1.2$ in Fig. 12.) Now integrate (66)–(67) using this initial condition until the solution is again in Σ, at $(v,w) = (V_1, 1)$. Repeat, using this initial condition, and thus generate a sequence $V_0, V_1, V_2, \ldots$, which are iterates of a map:

$$V_i = \psi(V_{i-1}), \qquad i = 1, 2, \ldots$$
(69)

The periodic orbit in Fig. 12 is a fixed point of this map, defined by $V^* = \psi(V^*)$, an algebraic equation whose solutions can be followed as parameters are varied. Note that ψ is defined using numerical integration of a set of differential equations, and this integration must be done accurately for the technique to be successful [68]. This method will find an unstable periodic orbit, as long as the initial condition is close enough to it. A modification of this approach involves “multiple shooting,” where several Poincaré sections are put in phase space, and one successively integrates from one to the next and concatenates the results [59, 75].
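The following sketch constructs ψ for (66)–(67) with an event-detecting integrator and locates the fixed point $V^*$ by root finding; the integration time, tolerances and bracket are illustrative choices.

```python
# Poincare map for (66)-(67): psi maps v on the section w = 1 (with v > 0.5)
# to v at the next return, as in (69).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def fhn(t, y):
    v, w = y
    return [(v*(v + 0.1)*(1.0 - v) - w + 1.0)/0.1, v - 0.5*w]

def psi(V):
    crossing = lambda t, y: y[1] - 1.0     # the section w = 1
    crossing.direction = 1.0               # count upward crossings only
    sol = solve_ivp(fhn, (0.0, 100.0), [V, 1.0], events=crossing,
                    rtol=1e-10, atol=1e-12)
    hits = [y[0] for t, y in zip(sol.t_events[0], sol.y_events[0])
            if t > 1e-6 and y[0] > 0.5]    # returns to Sigma, excluding t = 0
    return hits[0]

V_star = brentq(lambda V: V - psi(V), 0.8, 1.4)   # fixed point V* = psi(V*)
```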

Fig. 12

Finding a periodic orbit of (66)–(67) using a Poincaré section, Σ. The arrow indicates the direction of motion. See text for details

This technique, of using numerical integration of the underlying system over short time intervals to find both stable and unstable objects, is an example of bifurcation analysis using timesteppers [76]. It forms the basis of the “equation-free” approach, which we now discuss.

5.6 Equation-Free

When studying systems such as (1) it is often assumed that one can explicitly and quickly evaluate g(u;μ) using a subroutine or by calling a Matlab “function file,” for example. However, there is no requirement that g(u;μ) be explicitly specified, and as long as, given u and μ, a reasonably accurate estimate of g(u;μ) can be obtained, one can use the ideas presented above. This idea forms part of the “equation-free” approach to studying complex multiscale systems [77–79]. We now briefly summarise some of the relevant ideas.

Suppose that we have a well-defined dynamical system

$$\frac{dv}{dt} = \phi(v;\mu),$$
(70)

where $v \in \mathbb{R}^n$ and $1 \ll n$. The components of v could be all of the variables associated with a network of Hodgkin–Huxley-type neurons, or the positions and velocities of a number of interacting particles, for example. We refer to (70) as the “microscopic” or “fine-scale” description of our model. Suppose also that we believe that (70) can be effectively described by the dynamical system

$$\frac{dV}{dt} = \Phi(V;\mu),$$
(71)

where $V \in \mathbb{R}^m$ and $m \ll n$. Determining that this is the case (and what the variable V actually is) is a complex topic that we will not go into here, but at its simplest, a particular component of V could be the average of some components of v, for example. By “effectively described” we mean that a bifurcation analysis, or numerical integration, of (71) would give similar results as performing these operations on (70), with any differences being easily explained and unimportant. We refer to (71) as the “macroscopic” or “coarse” description of our model. We would like to perform bifurcation analysis of (71), but we do not have an explicit expression for Φ. To evaluate Φ(V;μ) for particular V and μ we need two operators which map between v and V. The first is a lifting operator $L: \mathbb{R}^m \to \mathbb{R}^n$ such that L(V)=v. We also need a restricting operator $R: \mathbb{R}^n \to \mathbb{R}^m$ such that R(v)=V. These operators must satisfy the consistency condition R(L(V))=V. Since L maps from a low-dimensional space to a high-dimensional one it is often not unique.

To evaluate $\Phi(V_0;\mu_0)$ we first lift $V_0$ to produce $v_0$, where $L(V_0) = v_0$. We then initialise the fine-scale model (70) with the initial condition $v_0$ (and $\mu = \mu_0$) and simulate (70) for a short amount of time, of duration say Δ, resulting in the state v(Δ). Restricting this gives an estimate of V(Δ), i.e. $V(\Delta) = R(v(\Delta))$. We thus have

$$\frac{V(\Delta) - V_0}{\Delta} \approx \left.\frac{dV}{dt}\right|_{V=V_0} = \Phi(V_0;\mu_0).$$
(72)

In other words, we estimate Φ(V;μ) by running a short “burst” of the microscopic system, suitably initialised, and restricting the result. Since there are normally many different v 0 consistent with a particular V 0 , running many bursts with the different v 0 and then averaging often results in a better estimate of Φ( V 0 ; μ 0 )—this is ideally done in parallel on multiple processors. Thus in principle we can evaluate Φ(V;μ) for any V and μ, and this is all we need to perform bifurcation analysis (or numerical integration) of (71). Note that V can be approximately stationary even though v is not (if V is average firing rate of a network, and v contains voltages of neurons in the network, for example) so when finding “fixed points” of Φ, one may need to relax the criteria for determining when Φ=0.
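Schematically, the estimate (72) might be implemented as in the following sketch, where lift, restrict and step_fine are problem-specific placeholders, and the burst length Δ and the number of copies are illustrative.

```python
# Coarse time-stepper: estimate Phi(V0; mu0) by lifting, running a short
# burst of the fine-scale model (70), restricting, and using (72).
import numpy as np

def coarse_rhs(V0, mu0, lift, restrict, step_fine, Delta=0.1, n_copies=8):
    estimates = []
    for _ in range(n_copies):          # average over consistent liftings
        v = lift(V0)                   # L(V0) = v0, generally non-unique
        v = step_fine(v, mu0, Delta)   # simulate (70) for a time Delta
        estimates.append(restrict(v))  # R(v(Delta)), an estimate of V(Delta)
    V_Delta = np.mean(estimates, axis=0)
    return (V_Delta - V0) / Delta      # finite-difference estimate of Phi
```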

Often in the equation-free approach “the devil is in the details,” and a number of applications in computational neuroscience are demonstrated in [80–82]. As shown in these papers, the choice of V can often be done semi-automatically using techniques from data-mining. (A long simulation of (70) is done, and the results mined to find whether there is a low-dimensional parametrisation of the data set. If so, these parameters form the components of V.) Note that while we have written the microscopic model (70) as a deterministic differential equation, it could equally well be a stochastic differential equation, or even a dynamical system in which both time and state are discrete [79, 83]. An interesting generalisation of the method presented here was shown in [84], where the analogue of the function Φ was evaluated experimentally in real time, rather than by running a computer simulation.

6 Conclusion

Numerical continuation is a powerful technique, allowing one to follow solutions of sets of algebraic equations as a parameter is varied. We have given an introduction to this technique, discussed its use in the investigation of various models arising in computational neuroscience, and demonstrated its use with a number of examples. The technique is general, but we have concentrated on high-dimensional systems which arise as discretisations of neural field models. By suitable modification the technique can be used to follow bifurcations as two parameters are varied, and to follow solutions which are periodic in time, or uniformly translating or rotating. Its use in high-dimensional systems has been restricted in the past by issues of memory and computational time, but with cheaper memory and faster processors continuing to be produced, we expect the technique to continue to be useful for the investigation of complex, high-dimensional dynamical systems.

References

  1. Bressloff PC: Waves in Neural Media. Springer, Berlin; 2014.

    Book  MATH  Google Scholar 

  2. Ermentrout GB, Terman DH 64. In Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.

    Chapter  Google Scholar 

  3. Seydel R 5. In Practical Bifurcation and Stability Analysis. Springer, Berlin; 2010.

    Chapter  Google Scholar 

  4. Govaerts WJ: Numerical Methods for Bifurcations of Dynamical Equilibria. SIAM, Philadelphia; 2000.

    Book  MATH  Google Scholar 

  5. Krauskopf B, Osinga HM, Galán-Vioque J: Numerical Continuation Methods for Dynamical Systems: Path Following and Boundary Value Problems. Understanding Complex Systems. Springer, Berlin; 2007.

    Book  Google Scholar 

  6. Allgower EL, Georg K: Introduction to Numerical Continuation Methods. SIAM, Philadelphia; 2003.

    Book  MATH  Google Scholar 

  7. Beyn W-J, Champneys A, Doedel E, Govaerts W, Kuznetsov YA, Sandstede B: Numerical continuation, and computation of normal forms. Handbook of Dynamical Systems 2. In Handbook of Dynamical Systems. Edited by: Fiedler B. Elsevier, Amsterdam; 2002:149–219.

    Google Scholar 

  8. Salinger AG, Bou-Rabee NM, Pawlowski RP, Wilkes ED, Burroughs EA, Lehoucq RB, Romero LA: Loca 1.0 library of continuation algorithms: theory and implementation manual. Sandia National Laboratories, SAND2002–0396; 2002. Salinger AG, Bou-Rabee NM, Pawlowski RP, Wilkes ED, Burroughs EA, Lehoucq RB, Romero LA: Loca 1.0 library of continuation algorithms: theory and implementation manual. Sandia National Laboratories, SAND2002-0396; 2002.

  9. Doedel EJ, Champneys AR, Fairgrieve TF, Kuznetsov YA, Sandstede B, Wang X: “Auto97,” Continuation and bifurcation software for ordinary differential equations; 1998. Doedel EJ, Champneys AR, Fairgrieve TF, Kuznetsov YA, Sandstede B, Wang X: “Auto97,” Continuation and bifurcation software for ordinary differential equations; 1998.

  10. Dhooge A, Govaerts W, Kuznetsov YA: Matcont: a Matlab package for numerical bifurcation analysis of odes. ACM Trans. Math. Softw. 2003, 29(2):141–164.

    Article  MATH  MathSciNet  Google Scholar 

  11. Engelborghs K, Luzyanina T, Roose D: Numerical bifurcation analysis of delay differential equations using dde-biftool. ACM Trans. Math. Softw. 2002, 28(1):1–21.

    Article  MATH  MathSciNet  Google Scholar 

  12. Ermentrout B 14. In Simulating, Analyzing, and Animating Dynamical Systems: a Guide to XPPAUT for Researchers and Students. SIAM, Philadelphia; 2002.

    Chapter  Google Scholar 

  13. Uecker H, Wetzel D, Rademacher JDM: pde2path - A Matlab package for continuation and bifurcation in 2D elliptic systems. arXiv:1208.3112; 2012.

  14. Dankowicz H, Schilder F: Recipes for Continuation. SIAM, Philadelphia; 2013.

    Book  MATH  Google Scholar 

  15. Wilson HR, Cowan JD: A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik 1973, 13(2):55–80.

    Article  MATH  Google Scholar 

  16. Amari S: Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 1977, 27(2):77–87.

    Article  MATH  MathSciNet  Google Scholar 

  17. Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (i): bifurcation in finite dimensions. Int. J. Bifurc. Chaos 1991, 1(03):493–520.

    Article  MATH  MathSciNet  Google Scholar 

  18. Laing C, Troy W, Gutkin B, Ermentrout G: Multiple bumps in a neuronal model of working memory. SIAM J. Appl. Math. 2002, 63: 62.

    Article  MATH  MathSciNet  Google Scholar 

  19. Laing C, Troy W: PDE methods for nonlocal models. SIAM J. Appl. Dyn. Syst. 2003, 2(3):487–516.

    Article  MATH  MathSciNet  Google Scholar 

  20. Pinto DJ, Ermentrout GB: Spatially structured activity in synaptically coupled neuronal networks: II. lateral inhibition and standing pulses. SIAM J. Appl. Math. 2001, 62(1):226–243.

    Article  MATH  MathSciNet  Google Scholar 

  21. Bressloff P: Spatiotemporal dynamics of continuum neural fields. J. Phys. A, Math. Theor. 2012., 45(3): Article ID 033001 Article ID 033001

  22. Coombes S: Waves, bumps, and patterns in neural field theories. Biol. Cybern. 2005, 93(2):91–108.

    Article  MATH  MathSciNet  Google Scholar 

  23. Coombes S, beim Graben P, Potthast R, Wright J (Eds): Neural Fields: Theory and Applications. Springer, Berlin; 2014.

    Google Scholar 

  24. Wimmer K, Nykamp DQ, Constantinidis C, Compte A: Bump attractor dynamics in prefrontal cortex explains behavioral precision in spatial working memory. Nat. Neurosci. 2014, 17: 431–439.

    Article  Google Scholar 

  25. Beyn W-J, Thümmler V: Freezing solutions of equivariant evolution equations. SIAM J. Appl. Dyn. Syst. 2004, 3(2):85–116.

    Article  MATH  MathSciNet  Google Scholar 

  26. Elvin AJ: Pattern formation in a neural field model. PhD thesis. Auckland, New Zealand, Massey University; 2008 Elvin AJ: Pattern formation in a neural field model. PhD thesis. Auckland, New Zealand, Massey University; 2008

  27. Laing C, Longtin A: Noise-induced stabilization of bumps in systems with long-range spatial coupling. Phys. D, Nonlinear Phenom. 2001, 160(3):149–172.

    Article  MATH  MathSciNet  Google Scholar 

  28. Ermentrout B, Folias SE, Kilpatrick ZP: Spatiotemporal pattern formation in neural fields with linear adaptation. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.

    Google Scholar 

  29. Kilpatrick ZP: Coupling layers regularizes wave propagation in stochastic neural fields. Phys. Rev. E 2014., 89(2): Article ID 022706 Article ID 022706

  30. Trefethen L 10. In Spectral Methods in MATLAB. SIAM, Philadelphia; 2000.

    Chapter  Google Scholar 

  31. Rowley CW, Kevrekidis IG, Marsden JE, Lust K: Reduction and reconstruction for self-similar dynamical systems. Nonlinearity 2003, 16(4):1257.

  32. Cliffe KA, Spence A, Tavener SJ: The numerical analysis of bifurcation problems with application to fluid mechanics. Acta Numer. 2000, 9:39–131.

  33. Dijkstra HA, Wubs FW, Cliffe KA, Doedel E, Dragomirescu IF, Eckhardt B, Gelfgat AY, Hazel A, Lucarini V, Salinger AG, Phipps ET, Sanchez-Umbria J, Schuttelaars H, Tuckerman LS, Thiele U: Numerical bifurcation methods and their application to fluid dynamics: analysis beyond simulation. Commun. Comput. Phys. 2014, 15(1):1–45.

  34. Bär M, Bangia AK, Kevrekidis IG: Bifurcation and stability analysis of rotating chemical spirals in circular domains: boundary-induced meandering and stabilization. Phys. Rev. E 2003, 67(5): Article ID 056126.

  35. Schneider TM, Gibson JF, Burke J: Snakes and ladders: localized solutions of plane Couette flow. Phys. Rev. Lett. 2010, 104: Article ID 104501.

  36. Lord G, Thümmler V: Computing stochastic traveling waves. SIAM J. Sci. Comput. 2012, 34(1):B24-B43.

  37. Coombes S, Schmidt H, Laing C, Svanstedt N, Wyller J: Waves in random neural media. Discrete Contin. Dyn. Syst. 2012, 32: 2951–2970.

  38. Coombes S, Owen MR: Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J. Appl. Dyn. Syst. 2004, 3(4):574–600.

  39. Pinto D, Ermentrout G: Spatially structured activity in synaptically coupled neuronal networks: I. traveling fronts and pulses. SIAM J. Appl. Math. 2001, 62(1):206–225.

  40. Curtu R, Ermentrout B: Pattern formation in a network of excitatory and inhibitory cells with adaptation. SIAM J. Appl. Dyn. Syst. 2004, 3(3):191–231.

  41. Quarteroni A, Sacco R, Saleri F: Numerical Mathematics. Texts in Applied Mathematics. Springer, Berlin; 2007.

  42. Saad Y, Schultz MH: GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Stat. Comput. 1986, 7(3):856–869.

  43. Rankin J, Avitabile D, Baladron J, Faye G, Lloyd D: Continuation of localized coherent structures in nonlocal neural field equations. SIAM J. Sci. Comput. 2014, 36(1):B70-B93.

  44. Bressloff PC, Kilpatrick ZP: Two-dimensional bumps in piecewise smooth neural fields with synaptic depression. SIAM J. Appl. Math. 2011, 71(2):379–408.

  45. Folias SE, Bressloff PC: Breathing pulses in an excitatory neural network. SIAM J. Appl. Dyn. Syst. 2004, 3(3):378–407.

  46. Owen M, Laing C, Coombes S: Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J. Phys. 2007, 9: 378.

  47. Coombes S, Venkov N, Shiau L, Bojak I, Liley D, Laing C: Modeling electrocortical activity through improved local approximations of integral neural field equations. Phys. Rev. E 2007, 76(5): Article ID 051901.

  48. Laing CR: Spiral waves in nonlocal equations. SIAM J. Appl. Dyn. Syst. 2005, 4(3):588–606.

  49. Huang X, Troy WC, Yang Q, Ma H, Laing CR, Schiff SJ, Wu J-Y: Spiral waves in disinhibited mammalian neocortex. J. Neurosci. 2004, 24(44):9897–9902.

  50. Kilpatrick ZP, Bressloff PC: Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression. J. Comput. Neurosci. 2010, 28(2):193–209.

  51. Laing CR: PDE methods for two-dimensional neural fields. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.

  52. Coombes S, Laing C: Delays in activity-based neural networks. Philos. Trans. R. Soc., Math. Phys. Eng. Sci. 2009, 367(1891):1117–1129.

  53. Laing CR, Longtin A: Dynamics of deterministic and stochastic paired excitatory-inhibitory delayed feedback. Neural Comput. 2003, 15(12):2779–2822.

  54. Meijer H, Coombes S: Travelling waves in models of neural tissue: from localised structures to periodic waves. EPJ Nonlinear Biomed. Phys. 2014, 2(1):3.

  55. Meijer HG, Coombes S: Travelling waves in a neural field model with refractoriness. J. Math. Biol. 2014, 68(5):1249–1268.

  56. Faye G, Faugeras O: Some theoretical and numerical results for delayed neural field equations. Phys. D, Nonlinear Phenom. 2010, 239(9):561–578.

  57. Szalai R: Knut: A continuation and bifurcation software for delay-differential equations. [http://gitorious.org/knut/pages/Home]

  58. Guckenheimer J, Holmes P: Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York; 1983.

  59. Kuznetsov YA: Elements of Applied Bifurcation Theory. Springer, Berlin; 1998.

  60. Wiggins S: Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer, Berlin; 1990.

  61. Elvin A, Laing C, McLachlan R, Roberts M: Exploiting the Hamiltonian structure of a neural field model. Phys. D, Nonlinear Phenom. 2010, 239(9):537–546.

  62. Coombes S, Lord GJ, Owen MR: Waves and bumps in neuronal networks with axo-dendritic synaptic interactions. Phys. D, Nonlinear Phenom. 2003, 178(3):219–241.

  63. Champneys AR, Kuznetsov YA, Sandstede B: A numerical toolbox for homoclinic bifurcation analysis. Int. J. Bifurc. Chaos 1996, 6(05):867–887.

  64. Guo Y, Chow CC: Existence and stability of standing pulses in neural networks: I. existence. SIAM J. Appl. Dyn. Syst. 2005, 4(2):217–248.

  65. Blomquist P, Wyller J, Einevoll GT: Localized activity patterns in two-population neuronal networks. Phys. D, Nonlinear Phenom. 2005, 206(3):180–212.

  66. Coombes S, Schmidt H, Avitabile D: Spots: breathing, drifting and scattering in a neural field model. In Neural Fields: Theory and Applications. Edited by: Coombes S, beim Graben P, Potthast R, Wright J. Springer, Berlin; 2014.

  67. Griewank A, Reddien G: The calculation of Hopf points by a direct method. IMA J. Numer. Anal. 1983, 3(3):295–303.

  68. Wasylenko TM, Cisternas JE, Laing CR, Kevrekidis IG: Bifurcations of lurching waves in a thalamic neuronal network. Biol. Cybern. 2010, 103(6):447–462.

  69. Shiau L, Laing CR: Periodically forced piecewise-linear adaptive exponential integrate-and-fire neuron. Int. J. Bifurc. Chaos 2013, 23(10): Article ID 1350171.

  70. Laing CR, Coombes S: Mode locking in a periodically forced “ghostbursting” neuron model. Int. J. Bifurc. Chaos 2005, 15(04):1433–1444.

  71. Coombes S, Laing C: Pulsating fronts in periodically modulated neural field models. Phys. Rev. E 2011, 83(1): Article ID 011912.

  72. Schmidt H, Hutt A, Schimansky-Geier L: Wave fronts in inhomogeneous neural field models. Phys. D, Nonlinear Phenom. 2009, 238(14):1101–1112.

  73. Doedel E, Keller HB, Kernevez JP: Numerical analysis and control of bifurcation problems (ii): bifurcation in infinite dimensions. Int. J. Bifurc. Chaos 1991, 1(04):745–772.

  74. Hastings SP: Single and multiple pulse waves for the FitzHugh–Nagumo equations. SIAM J. Appl. Math. 1982, 42(2):247–260.

  75. Sánchez J, Net M: On the multiple shooting continuation of periodic orbits by Newton–Krylov methods. Int. J. Bifurc. Chaos 2010, 20(01):43–61.

  76. Tuckerman LS, Barkley D: Bifurcation analysis for timesteppers. In Numerical Methods for Bifurcation Problems and Large-Scale Dynamical Systems. Edited by: Doedel E, Tuckerman LS. The IMA Volumes in Mathematics and Its Applications 119. Springer, New York; 2000:453–466.

  77. Kevrekidis IG, Samaey G: Equation-free multiscale computation: algorithms and applications. Annu. Rev. Phys. Chem. 2009, 60(1):321–344.

  78. Kevrekidis Y, Samaey G: Equation-free modeling. Scholarpedia 2010, 5(9):4847.

  79. Theodoropoulos C, Qian Y-H, Kevrekidis IG: “Coarse” stability and bifurcation analysis using time-steppers: a reaction-diffusion example. Proc. Natl. Acad. Sci. USA 2000, 97(18):9840–9843.

  80. Laing CR: On the application of “equation-free modelling” to neural systems. J. Comput. Neurosci. 2006, 20(1):5–23.

  81. Laing C, Frewen T, Kevrekidis I: Coarse-grained dynamics of an activity bump in a neural field model. Nonlinearity 2007, 20(9):2127.

  82. Laing CR, Frewen T, Kevrekidis IG: Reduced models for binocular rivalry. J. Comput. Neurosci. 2010, 28(3):459–476.

  83. Zou Y, Fonoberov VA, Fonoberova M, Mezic I, Kevrekidis IG: Model reduction for agent-based social simulation: coarse-graining a civil violence model. Phys. Rev. E 2012., 85(6): Article ID 066106 Article ID 066106

  84. Sieber J, Gonzalez-Buelga A, Neild SA, Wagg DJ, Krauskopf B: Experimental continuation of periodic orbits through a fold. Phys. Rev. Lett. 2008, 100: Article ID 244101.

Acknowledgements

I thank Stephen Coombes, Kyle Wedgwood, Daniele Avitabile and the referees for useful comments.

Author information

Corresponding author

Correspondence to Carlo R Laing.

Additional information

Competing Interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Laing, C.R. Numerical Bifurcation Theory for High-Dimensional Neural Models. J. Math. Neurosc. 4, 13 (2014). https://doi.org/10.1186/2190-8567-4-13

