Neural field models describe the coarse-grained activity of populations of interacting neurons. Because of the laminar structure of real cortical tissue they are often studied in two spatial dimensions, where they are well known to generate rich patterns of spatiotemporal activity. Such patterns have been interpreted in a variety of contexts ranging from the understanding of visual hallucinations to the generation of electroencephalographic signals. Typical patterns include localized solutions in the form of traveling spots, as well as intricate labyrinthine structures. These patterns are naturally defined by the interface between low and high states of neural activity. Here we derive the equations of motion for such interfaces and show, for a Heaviside firing rate, that the normal velocity of an interface is given in terms of a non-local Biot-Savart type interaction over the boundaries of the high activity regions. This exact, but dimensionally reduced, system of equations is solved numerically and shown to be in excellent agreement with the full nonlinear integral equation defining the neural field. We develop a linear stability analysis for the interface dynamics that allows us to understand the mechanisms of pattern formation that arise from instabilities of spots, rings, stripes and fronts. We further show how to analyze neural field models with linear adaptation currents, and determine the conditions for the dynamic instability of spots that can give rise to breathers and traveling waves.
The functional organization of cortex appears to be roughly columnar, with the laminar sub-structure of each column organizing its micro-circuitry. These columns tessellate the two-dimensional cortical sheet with high density, e.g., 2,000 cm^2 of human cortex contain 10^5 to 10^6 macrocolumns, comprising about 10^5 neurons each. Neural field models describe the mean activity of such columns by approximating the cortical sheet as a continuous excitable medium. They can generate rich patterns of emergent spatiotemporal activity and have been used to understand visual hallucinations, mechanisms for short term working memory, motion perception, the generation of electroencephalographic signals and many other neural phenomena. We refer the reader to [1,2] for recent discussions of neural field models and their uses, and in particular to the work of Bressloff and colleagues [3-5] and Owen et al. for results on planar systems. A minimal two-dimensional neural field model can be written as an integro-differential equation of the form
where $\mathbf{r} \in \mathbb{R}^2$ and $t \in \mathbb{R}^+$. Here the variable u represents synaptic activity and the kernel w represents anatomical connectivity. The nonlinear function H represents the firing rate of the tissue and will be taken to be a Heaviside, so that the parameter h is interpreted as a firing threshold. For the case of a symmetric synaptic kernel, the model also has a Liapunov function [6,7] given by
which can be useful in determining the stability of equilibrium solutions.
Neural field models support traveling waves that underlie EEG signals, but also spots of localized high firing activity, which have been linked to models of working memory. These spots can become unstable and can pattern cortex with intricate structures. In Figure 1A we show results of a direct numerical simulation with a classic Mexican-hat choice for w. For further details see the discussion around Equation (20) and Section A.1 in the Appendix (for the numerical scheme). Here Equation (1) describes a single population model with short-range excitation and long-range inhibition. This minimal example nicely illustrates the ability of neural field models to generate intricate spreading labyrinthine patterns. We do not expect to find labyrinthine patterns as such in real brain activity. However, they provide a convenient (and visually striking) proxy for the generation of complex patterns of activity that emerge spontaneously and/or can be evoked, for example in visual cortex. Labyrinthine patterns are also seen when the Heaviside firing rate function is replaced by a steep sigmoid, as will be discussed later. Visual inspection suggests that much of the behavior of such patterns can be described simply by tracking the boundary between high and low states of activity. Indeed this appears to resonate with neuroscientific practice, where changes of brain activity are often of greater interest than the current brain state per se. Hence it is of interest to ask whether the dynamics of (1) can be replaced by a lower dimensional description that evolves the boundary between high and low states of activity. This programme has already been developed by Amari in his seminal article on one-dimensional models, where the interface reduces naturally to a point (or a set of points). However, in two spatial dimensions the interface is more naturally a closed curve (or a set of closed curves).
Fig. 1. Labyrinthine structure emerging from (1) and (20) with parameters , and Heaviside threshold . The initial spot of radius has a mode four instability, cf. Figure 4. This is primed by perturbing R with . Rows A show u and the colorbar below indicates its values. Rows B illustrate the evolution of the interface (, golden outline) due to the normal velocity of the boundary (green arrows, to scale but enlarged by a factor 50). The Liapunov function of (2) is noted at all eight time steps. See also the video in Additional file 1.
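A direct numerical simulation of this kind can be sketched in a few lines. The following is a minimal, hedged implementation: the kernel is an illustrative difference of modified Bessel functions K0 (short-range excitation, longer-range inhibition — not Equation (20) verbatim), all parameter values are our own choices, and the spatial convolution is evaluated by FFT on a periodic grid.

```python
import numpy as np
from scipy.special import kn

def mexican_hat(r, gamma=0.5):
    """Difference-of-K0 kernel: short-range excitation, longer-range inhibition.

    Illustrative stand-in for the paper's Mexican hat, not its exact form."""
    r = np.maximum(r, 1e-8)              # K0 diverges logarithmically at r = 0
    return kn(0, r) - gamma * kn(0, 0.5 * r)

def simulate(n=128, L=40.0, h=0.3, alpha=1.0, dt=0.1, steps=200, seed=0):
    """Forward-Euler integration of u_t = alpha*(-u + w * H(u - h)), periodic grid."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    dx = L / n
    # kernel spectrum; ifftshift centres the kernel on the (0, 0) grid index
    w_hat = np.fft.fft2(np.fft.ifftshift(mexican_hat(np.hypot(X, Y)))) * dx * dx
    rng = np.random.default_rng(seed)
    u = np.where(np.hypot(X, Y) < 4.0, 1.0, 0.0)   # initial super-threshold spot
    u += 0.01 * rng.standard_normal(u.shape)       # small noise primes instabilities
    for _ in range(steps):
        f = (u > h).astype(float)                  # Heaviside firing rate
        psi = np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))
        u += dt * alpha * (-u + psi)
    return u
```

With a suitable balance of excitation and inhibition, an unstable initial spot in such a simulation deforms and spreads in the manner of Figure 1A.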
The main topic of this article is the development of an equivalent interface description for neural field models of the type exemplified by (1). We show that activity patterns can be described by dynamical equations of reduced dimension, and that these depend only on the shape of the interface (requiring no knowledge of activity away from the interface). Not only is this description amenable to fast numerical simulation strategies, it allows for the construction of localized states and an analysis of their linear stability. Given the computational overheads in simulating the full neural field model this enhances our ability to study pattern formation and suggests more generally that modeling the interfaces of patterns, rather than the patterns themselves, may lead to novel, efficient descriptions of brain activity. Indeed the use of interface dynamics to analyze patterns that arise in partial differential equation models of chemical and physical systems has a strong history, and it is natural to translate some of the ideas and technologies from these studies to non-local neural field models. The work by Goldstein [12,13] and Muratov on pattern formation in two-dimensional excitable reaction-diffusion systems is especially relevant in this context, as both authors have developed effective descriptions of interface dynamics in terms of non-local interactions. See also the book by Desai and Kapral for a recent overview.
It is worth pointing out that whether computing interface dynamics can compete with other numerical schemes will depend on the problem at hand. In general, boundaries that remain relatively short and do not pinch guarantee a speed advantage. In practice, we expect this approach to be especially relevant for (semi-) analytical work aiming at qualitative understanding, as illustrated by some of the examples presented in this article.
In Section 2 we present some of the key ideas behind an interface dynamics in the setting of a one-dimensional neural field model. This is particularly useful for introducing the definition of normal velocity from a level-set condition, as well as establishing what it means for an interface to be linearly stable. The extension of these ideas to two-dimensional systems is presented in Section 3. By writing the synaptic connectivity in terms of a linear combination of Bessel functions, we show that dynamics for the interface can be constructed in terms of line-integrals along the interface, and that the normal velocity of the interface is driven by Biot-Savart-style interactions. Thus we obtain a reduced description for the evolution of a pattern boundary solely in terms of quantities on the boundary itself. Numerical simulations of the interface dynamics are shown to be in direct correspondence with those of the full neural field model. The notion of linear stability of stationary solutions in the interface framework is fleshed out in a series of examples (for spots, rings, stripes and fronts) in Sections 4 and 5, and allows us to understand some of the mechanisms for pattern formation. In Section 6 we add linear adaptation to (1) and extend our analysis to cover this important neural phenomenon. This can introduce dynamic instabilities of stationary structures, and we calculate where breathing and drift instabilities for localized spots occur. Moreover, we use a perturbation argument to determine the shape of traveling spots that emerge beyond a drift instability and show that spots contract in the direction of propagation and widen in the orthogonal direction. Finally, in Section 7 we discuss extensions of the work in this article.
2 A one-dimensional primer
Before we develop the machinery for describing the evolution of interfaces in two-dimensional neural field models, it is informative to begin with a discussion in one dimension. In this case a minimal model can be written in the form
where $x \in \mathbb{R}$ and $t \in \mathbb{R}^+$. For a symmetric choice of synaptic kernel, which decays exponentially, the one-dimensional model (3) is known to support a traveling front solution [16,17] that connects a high activity state to a low activity state. In this case it is natural to define a pattern boundary as the interface between these two states. Thus we can define a moving interface (level set) according to
Here we are assuming that there is only one point on the interface, though in principle we could consider a set of points. The function $x_0(t)$ gives the evolution of the interface. Since the high and low activity states in the neural field model are naturally distinguished by determining whether u is above or below the firing threshold, we shall take the constant on the right hand side of (4) to be h (though other choices are also possible). Differentiation of (4) gives an exact expression for the velocity of the interface in the form
We can now describe the properties of a front solution solely in terms of the behavior at the front edge which separates high activity from low. To see this, let us assume that the front is such that for and for . Then (3) reduces to
Integrating (7) from −∞ to t (and dropping transients) yields
Hence from (5) the speed of the front is given implicitly by the equation
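For a concrete kernel choice the implicit speed relation can be evaluated numerically. The sketch below assumes the classic exponential kernel w(x) = exp(-|x|)/2 (an assumption on our part — the text only requires a symmetric, exponentially decaying kernel), for which integrating along comoving characteristics gives h = (alpha/c) ∫₀^∞ exp(-alpha·s/c) W(s) ds with W(s) = ∫ₛ^∞ w(z) dz, and the root reduces to the known closed form c = alpha·(1 - 2h)/(2h).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

ALPHA = 1.0  # illustrative rate constant

def W(s):
    # tail integral of w(x) = exp(-|x|)/2 for s >= 0
    return 0.5 * np.exp(-s)

def threshold_of_speed(c):
    # h implied by a front of speed c (quadrature of the comoving-frame solution)
    val, _ = quad(lambda s: np.exp(-ALPHA * s / c) * W(s), 0.0, np.inf)
    return (ALPHA / c) * val

def front_speed(h):
    # invert the implicit relation h = threshold_of_speed(c) by bracketed root finding
    return brentq(lambda c: threshold_of_speed(c) - h, 1e-3, 100.0)

# e.g. h = 0.25 should recover c = ALPHA*(1 - 2*0.25)/(2*0.25) = 1.0
```

The quadrature route generalizes to any kernel whose tail integral W(s) is computable, whereas the closed form above is specific to the exponential kernel.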
To determine stability of the traveling wave we consider a perturbation of the interface and an associated perturbation of u. Introducing the notation to denote perturbed quantities, to a first approximation we will set , and write . The perturbation in u can be related to the perturbation in the interface by noting that both the perturbed and unperturbed boundaries are defined by the level set condition, so that . Introducing , we thus have the condition that for all t. Integrating (6) and dropping transients gives
Given this preliminary exposition of interface dynamics we are now ready to describe the extension to two dimensions and to address the additional challenges that working in the plane gives rise to.
3 Interface dynamics in two dimensions
As in the one-dimensional case we will define pattern boundaries as the interface between low and high states of neural activity. To be more precise we introduce the notation $\Omega(t)$ to denote the (compact) area of activity where $u > h$. The boundary, or interface, is defined by the threshold crossing condition $u = h$. In this case the model defined by (1) takes the form
and the Liapunov function can be written simply as
Note that $\Omega$ does not have to be simply connected and can describe a union of many disjoint active regions. However, for clarity of exposition we shall focus on describing the evolution of an interface that is a single closed curve, as depicted in Figure 2A. The extension to multiple closed curves is straightforward.
Fig. 2. Interface parametrization and Mexican-hat shape for synaptic kernel. A: Compact area and boundary . Two points on the boundary r, parametrized by s and , are shown with their normal n and tangent t vectors. B: Mexican hat (20) with parameters and . White contours indicate values <0, black ones ≥0, for a step size of 0.005.
It is well known that the two-dimensional model (17) can support localized states such as spots and rings [18,19] for a Mexican-hat synaptic connectivity. Recent work has shown how to determine the stability of such solutions to angular perturbations using an Evans function approach. An analogous numerical study for smooth sigmoidal firing rates can be found in the literature. These studies have highlighted, as compared to the one-dimensional model, that an extra spatial dimension can lead to azimuthal instabilities, whereby localized states can deform (or even split) into patterns with a reduced symmetry predicted by the shape of the most unstable eigenmode. Direct numerical simulations beyond such instability points have further shown the emergence of intricate spreading labyrinthine patterns like those in Figure 1, that leave behind a stable patterned state in their wake. It is our intention here to recover the Evans function results for stability, albeit using a purely interface description of the dynamics, as well as to determine the nonlinear equations of motion that govern the evolution of labyrinthine (and other) structures. Moreover, by employing a representation of the synaptic connectivity in terms of a linear combination of Bessel functions, we can obtain an exact, though spatially reduced, dynamical system for the interface that depends solely on the shape of the interface itself. In the following, we consider kernels of the form
which is shown for and in Figure 2B.
From the form of (22), (23), and (24), we see that the evolution of the interface does not require any knowledge of the neural field away from the contour, and rather just depends on the shape of the sets where the field is above threshold. We now exploit the choice of Bessel functions as a basis for constructing the synaptic kernel to show how the double integrals in (23) and (24) can be reduced to line integrals. This yields an elegant description of the interface dynamics that emphasizes how the geometry of $\partial\Omega$ drives the evolution of spatiotemporal patterns. The key step in this reformulation is the use of Green's identity. For a two-dimensional vector field F this identity is the two-dimensional version of the divergence theorem, which we write symbolically as $\iint_\Omega \nabla \cdot \mathbf{F}\,dA = \oint_{\partial\Omega} \mathbf{F} \cdot \mathbf{n}\,ds$. Using this first identity we may generate a second for a scalar field Ψ as $\iint_\Omega \nabla^2 \Psi\,dA = \oint_{\partial\Omega} \nabla\Psi \cdot \mathbf{n}\,ds$.
To evaluate the right hand sides of (23) and (24) it is enough to calculate $\psi$ and its gradient. In fact, this latter term can easily be rewritten as a line integral, using the second Green's identity, for any choice of synaptic kernel
Note that the choice of Bessel functions as a basis for w is merely a convenience that allows explicit calculations. As long as we can write the connectivity function w as the divergence of a vector field, we can exploit Green's first identity to turn the right hand side of (23) into a line integral.
From the Biot-Savart form of (29) we see that for every part i of the synaptic kernel there is an effective repulsion between two arc length positions with anti-parallel tangent vectors, although the combined effect when including all N terms will depend on the choice of the amplitudes. Now with (22), (27), and (28) the normal velocity on the interface can be written solely in terms of certain line integrals around the interface. From a computational perspective this leads to a substantial advantage in that one no longer needs to solve the full non-local neural field model (17) across the entire plane, and can instead simply evolve the interface in time by discretizing the boundary and translating the points with the normal velocity from (22) in the direction of n. One practical disadvantage of this is the need to monitor for possible self-intersections of the evolving boundary, for splitting (where a connected region pinches off into two or more disconnected regions), or indeed for the creation of new boundaries where none existed before. However, numerical schemes for coping with similar situations in fluid models are well developed in the literature, and it is natural to turn to these for more refined numerical schemes, including ones that can automate the process of contour surgery [22,23]. In Figure 1B we illustrate the simple numerical implementation of the interface dynamics described in Section A.2 in the Appendix, showing the effectiveness of the dimensionally reduced system at capturing the spatiotemporal pattern formation of the full model shown in Figure 1A.
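To make the line-integral reduction concrete, here is a hedged numerical sketch for the single basis kernel w(r) = K0(r). Writing w = ∇·F with a radial field F built from the identity ∫₀^r s K0(s) ds = 1 − r K1(r), the divergence theorem converts the area integral for ψ into a sum over boundary segments of a polygonal contour. The discretization (midpoint rule, counter-clockwise polygon) and all names are our own, not the scheme of the Appendix.

```python
import numpy as np
from scipy.special import k1

def psi_line_integral(r0, contour):
    """psi(r0) = integral of K0(|r0 - r'|) over the region enclosed by `contour`.

    `contour` is an (n, 2) array of vertices of a closed polygon, ordered
    counter-clockwise.  Uses w = div F with F = f(rho) * rho_hat and
    f(rho) = (1 - rho*K1(rho)) / rho, so the area integral becomes a
    boundary integral of F . n (midpoint rule per edge)."""
    seg = np.roll(contour, -1, axis=0) - contour            # edge vectors
    mid = contour + 0.5 * seg                               # edge midpoints
    ds = np.hypot(seg[:, 0], seg[:, 1])                     # edge lengths
    normal = np.stack([seg[:, 1], -seg[:, 0]], axis=1) / ds[:, None]  # outward for CCW
    d = mid - r0
    rho = np.hypot(d[:, 0], d[:, 1])
    f = (1.0 - rho * k1(rho)) / rho
    return np.sum(f * np.einsum('ij,ij->i', d / rho[:, None], normal) * ds)

# sanity geometry: a circle of radius 2, discretized as a 400-gon
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = 2.0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
```

At the disc centre the result can be checked against the exact value ∫_disc K0 dA = 2π(1 − R·K1(R)), and the same routine evaluated at boundary midpoints supplies the ψ needed for the normal velocity.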
Furthermore, in our calculations we have found that the key assumption of a Heaviside firing rate can be relaxed to a degree without fundamentally changing the results. This is illustrated in Figure 3, where we show the evolution in time of the interface and the corresponding Liapunov function. The evolution with a Heaviside firing rate is shown in red, and compared with simulations of the full neural field model using more biologically realistic sigmoids, with one case shown in green and one in blue. Here σ reflects the expected width of the distribution of firing thresholds around a mean h in the neural population, with the Heaviside case corresponding to $\sigma \to 0$. Figure 3 demonstrates that for these steep sigmoids very similar labyrinthine shapes arise, and closer inspection reveals that the main differences occur at the rapidly developing rim of the structure, whereas the settled interior is nearly identical. Thus a simple adjustment of the time constant α will in this case provide a near perfect match of the emerging structures. In Figure 3 we demonstrate this with the dashed and dotted red lines, which represent the Heaviside Liapunov function computed over longer time scales and then scaled down by adjusting α. A very close match to the sigmoidal Liapunov curves (green and blue lines) is then obtained. However, for broader sigmoids we find labyrinths still resembling the Heaviside one, but with more obvious spatial changes. The video in Additional file 3 shows one such case as an example. It would seem that mild deviations in the shape of the firing rate from Heaviside (to a steep sigmoidal form) are reflected more in temporal speed than in spatial shape changes.
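The firing-rate family discussed above can be sketched as follows; the logistic functional form is our illustrative choice (the text only requires a steep sigmoid with threshold h and width σ that recovers the Heaviside step as σ → 0).

```python
import numpy as np

def firing_rate(u, h, sigma):
    """Sigmoidal firing rate with threshold h and width sigma.

    sigma = 0 is treated as the Heaviside limit; the logistic shape is an
    illustrative choice, not necessarily the sigmoid used in Figure 3."""
    if sigma == 0.0:
        return (u > h).astype(float)                 # Heaviside limit
    return 1.0 / (1.0 + np.exp(-(u - h) / sigma))
```

In a simulation, the Heaviside indicator (u > h) is simply replaced by firing_rate(u, h, sigma); per the discussion above, for steep sigmoids the dominant effect is a change of effective time scale rather than of spatial shape.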
Fig. 3. Time dependence of the interface and Liapunov function. Red curves are for the Heaviside model of Figure 1, green and blue ones use a sigmoid instead. Circular labels indicate the times of the snapshots. Dashed and dotted curves scale the Heaviside one by adjusting α. See also the video in Additional file 2.
To obtain (30) we have used the fact that
As well as providing a computationally useful framework for studying pattern formation, the interface dynamics, including its Liapunov function, is also amenable to a direct linear stability analysis. This is especially useful for understanding how the instability of localized stationary states can seed interesting structures, like the labyrinths of Figures 1 and 3. Stationarity of a solution means that the normal velocity is zero all along the boundary of the active area. This is equivalent to demanding $\psi = h$ on the boundary. In this case (22) reduces to
where r is on the boundary parametrized by s. We use the notation to denote a stationary active region. Given the stationary interface, we can also calculate the stationary field u everywhere (away from the interface) using (17) as
which can also be evaluated as a line integral. In order to analyze the stability of stationary solutions in the original neural field formalism defined by (1) one would perturb the field variable u and linearize to derive an eigenvalue equation or Evans function . Here we determine stability using the interface dynamics, generalizing the approach described in Section 2.
Using the notation again to denote perturbed quantities, we consider small perturbations to the contour shape and denote the new interface by . The relationship between the perturbed interface and the perturbed field is, as in one dimension, determined by the condition , where
The dynamics for is given by (23) with replaced by . The perturbation affects the normal vector as well as the displacement vector that occurs in (27). Thus to evaluate (35) it is necessary to linearize about the unperturbed contour. In the case of interfaces without curvature the linear contribution to is zero. In contrast for curved interfaces an addition theorem for Bessel functions shows that there is a non-zero contribution. To clarify this statement and show how the above machinery is used in practice, we now give some explicit examples of localized solutions and their stability.
4 Localized states: spots
We consider spots to be circular stationary solutions. They are the equivalent of the bumps known in one spatial dimension. For radially symmetric kernels we expect stationary circular solutions. Yet in two spatial dimensions they can undergo azimuthal instabilities, as already noted in Section 3. In order to obtain circular solutions we use the standard parametrization of a circle for the contour and write
Hence the right hand side of (33) can be calculated using
where . Using Graf’s formula  to perform the integration in (37) we obtain an implicit equation for the spot radius R in the form
where $I_\nu$ is the modified Bessel function of the first kind of order ν. A plot of the spot radius R as a function of threshold h is shown in Figure 4.
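A hedged numerical sketch of this construction, for the single basis kernel w(r) = K0(r) (deliberately simpler than the Mexican hat of Figure 4, so a single monotone branch appears rather than a fold): solving the modified Helmholtz problem for a disc of radius R gives, at a boundary point, ψ(R) = 2πR·I1(R)·K0(R), so the threshold condition h = ψ(R) fixes the radius. The azimuthal eigenvalue expression below follows from our own perturbation calculation (R → R + ε·exp(λt + imθ)); symbols and normalizations are ours and may differ from the paper's.

```python
import numpy as np
from scipy.special import iv, kv
from scipy.optimize import brentq

def psi_boundary(R):
    # field of a disc of radius R of the kernel K0, evaluated on its own boundary
    return 2.0 * np.pi * R * iv(1, R) * kv(0, R)

def spot_radius(h):
    # psi_boundary rises monotonically from 0 towards pi, so h must lie in (0, pi)
    return brentq(lambda R: psi_boundary(R) - h, 1e-9, 100.0)

def eigenvalue(m, R, alpha=1.0):
    # growth rate of an azimuthal mode-m boundary perturbation (our derivation)
    return -alpha * (1.0 - iv(m, R) * kv(m, R) / (iv(1, R) * kv(1, R)))

R = spot_radius(1.0)
# m = 1 is neutral (translation invariance); m = 0 is unstable for this purely
# excitatory kernel -- a spot needs lateral inhibition to resist uniform expansion
```

Since I_m(R)K_m(R) decreases with m, all shape modes m ≥ 2 are damped here; the mode interlacing changes for the Mexican hat, producing the instability flags of Figure 4.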
Fig. 4. Stationary radial solutions for the Mexican hat kernel (20) with and various values of γ. The dotted branches are circular solutions unstable to uniform changes of size. Solid branches are stable. Dashed branches indicate azimuthal instabilities of different modes m which deform the circular solution.
For a circular solution of radius R, ψ is conveniently written as
Here ψ may be constructed explicitly (off the boundary), using similar line integral calculations to those for existence (above), and is given by
Note that since the kernel is real, so are these eigenvalues. A mode-m instability will occur if , which recovers the result obtained previously using an Evans function approach. The possibility of such azimuthal instabilities is indicated on the solution branches shown in Figure 4 (and we would expect the emergence of solution branches with the corresponding symmetry from the points marked by m). Interestingly we can see from (44) and (45) that the mode with $m = 1$ is neutrally stable, reflecting translation invariance. For a perturbation to a circular boundary of the form , and , the perturbation of the normal velocity is
To calculate the Liapunov function for an unperturbed spot we evaluate (30) using
5 Rings, fronts and stripes
In this section we show how to treat other simple interface shapes, namely rings, fronts and stripes, and determine their stability. We recover previous results for rings (obtained with an Evans function method), whilst calculations for the other structures are shown to be straightforward using the interface dynamics approach.
Rings can be considered as the difference of two spots, one with radius and the other with radius . Introducing , where is given by the right hand side of (41), we have that . Enforcing the threshold conditions gives a pair of equations that determine . To establish stability the outer contour is perturbed exactly as in the previous section: , for some small amplitude a. For the inner contour we similarly write . We now generate on each of the two boundaries and equate these to zero to generate two equations for the pair of unknown amplitudes . Demanding that this pair of equations has a non-trivial solution generates an equation for λ in the form where and
At a bifurcation point defined by we expect a ring to split into m spots. In Figure 5 we plot solution branches for ring solutions as a function of h for the Mexican-hat model defined by (20), and flag the types of instability that can occur. Of the two solution branches the lower one is unstable with respect to radial perturbations, whereas the upper branch is subject to azimuthal destabilizations. In Figure 6 we show a two-dimensional plot of an unstable ring solution, and the emergent structure of five bumps seen beyond instability, consistent with the predictions of our linear stability analysis.
Fig. 5. Existence and stability of ring solutions with inner radius , with the indicated γ and in (20). The lower branch is always unstable (dotted lines). On the upper branch the stable ring (solid lines) can lose stability with dominant (solid circles), 3 (triangles), 4 (squares), etc. for decreasing h.
Calculating stationary planar fronts is straightforward, since the normal vector is orthogonal to the displacement vector and the line integral on the right hand side of (33) is zero. Hence we have the existence condition and we note that h lies exactly halfway between the two possible steady states of u. To investigate the properties of a planar traveling front of speed c it is informative to treat the simple case . We then have that and
Hence using (22), the normal velocity is given by
For a modified Bessel function one has
Hence for the simple example above, the stationary planar front is stable due to . However, for a Mexican-hat function it is possible that for some band of wave numbers, and we would expect instabilities in this case. Figure 7 shows for a stationary front with the Mexican-hat function (20), from which the critical band of wave numbers can easily be read off.
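The front spectrum can be evaluated explicitly for K0-based kernels. Under our own interface-based derivation (symbols are ours, and the expression should be checked against the paper's), a transverse perturbation exp(λt + iky) of a stationary planar front grows at rate λ(k) = −α·(1 − W(k)/W(0)), where W(k) is the cosine transform of the two-dimensional kernel restricted to the interface line. For an assumed Mexican hat w(r) = K0(r) − γ·K0(r/2) this transform is available in closed form:

```python
import numpy as np

def spectrum(k, alpha=1.0, gamma=0.3):
    """Growth rate lambda(k) of a transverse front perturbation (our derivation).

    Uses the closed-form line transform of w(r) = K0(r) - gamma*K0(r/2):
    W(k) = pi/sqrt(1 + k^2) - 2*pi*gamma/sqrt(1 + 4*k^2)."""
    W = np.pi / np.sqrt(1.0 + k**2) - 2.0 * np.pi * gamma / np.sqrt(1.0 + 4.0 * k**2)
    W0 = np.pi * (1.0 - 2.0 * gamma)
    return -alpha * (1.0 - W / W0)

k = np.linspace(0.0, 3.0, 301)
lam = spectrum(k)
# lambda(0) = 0 is the translation mode; for gamma > 1/8 a finite band of
# wave numbers k > 0 has lambda > 0, signalling a transverse front instability
```

For γ = 0 (pure K0, no inhibition) the whole spectrum is non-positive, consistent with the stable stationary front discussed above; increasing γ past 1/8 opens the critical band of the kind read off from Figure 7.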
Fig. 7. Spectra for the Mexican-hat function (20) with and : for a stationary front (blue line, ), and varicose (green line) and sinuous (red line) stripes of width , respectively.
An example of is shown in Figure 8 for a Mexican-hat function.
Fig. 8. Stripe widths D for different values of γ with a Mexican hat interaction given by (20) and .
for . When considering small perturbations there is some ambiguity in expanding (57) depending on the relative size of the perturbations on each boundary. However, this at most amounts to a sign difference, which means that we may expand (57) to obtain
The branch with corresponds to sinuous perturbations with and the branch with corresponds to varicose perturbations with . Since then sinuous instabilities dominate over varicose. Note that as we recover the existence and stability results for a stationary front as expected. Examples of sinuous and varicose instabilities (as predicted from our analysis) are shown in Figure 9.
Fig. 9. Direct numerical simulation of u for varicose (rows A) and sinuous (rows B) instabilities with of a stripe of width for parameters , and Heaviside threshold . See also the video in Additional file 5.
6 Neural field models with linear adaptation
In real cortical tissue there is an abundance of metabolic processes whose combined effect is to modulate neuronal response. It is convenient to think of these processes in terms of local feedback mechanisms that modulate synaptic currents. For example, it is known that a model of synaptic depression can destabilize a spot in favor of a traveling pulse. Here we consider a simple linear model of adaptation that is known to lead to instabilities of localized structures. In this case the original neural field model is modified according to
with . Here ψ is the second term on the right hand side of (3) and (1) in one and two dimensions, respectively. The linearity of the equations of motion means that we may obtain the trajectory for in closed form as
which may be rearranged to give
Here ψ may be constructed explicitly off the boundary, and is given by Equation (41), so that . A saddle-node bifurcation of stationary spots occurs at where . Hence, in the plane stationary solutions only exist for . Under variation in α we expect the emergence of a drifting spot. Beyond a drift instability, we expect to be able to find traveling spots that move in some direction c with constant speed . These can be constructed as stationary solutions in a co-moving frame , and satisfy
We may write the velocity in terms of local co-ordinates on the moving interface as , where is the normal velocity and the tangential velocity at a point on the interface. Taking the cross product of c and t (and using ) shows that . Hence, the condition for stationary propagation, with , is
In general this is a hard equation to solve in closed form. However, to obtain an estimate of the speed and shape of a spot beyond a point of instability it is enough to consider a weak distortion of a traveling circular wave . Choosing and writing , and assuming that ψ is rotationally symmetric means that we may construct a solution in the form
We note that the threshold condition for a circular spot () can only strictly be met for the case , since the right hand side of (74) depends on the plane polar angle through . For this case we may construct the equation to determine the eigenvalues that occur in perturbations of the form as solutions to , where
The eigenvalues for are determined by , which has two solutions: and . Hence this mode becomes unstable as g increases through . It is also possible that a breathing instability may arise for the mode with . Note that another way to generate breathing solutions is to include localized inputs [3,4], breaking the homogeneous structure of the network. Substitution of into (75) gives the condition for this instability as:
with emergent frequency . For splitting instabilities can be determined by setting in (75) to give the conditions . An example of a breather arising as an instability of a spot is shown in Figure 10 (and numerical simulations confirm the predicted value of the emergent frequency around the bifurcation point).
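A direct-simulation sketch of the adaptive model: the coupling below (u drives the adaptation variable a, which feeds back negatively with strength g) and all parameter values are illustrative choices consistent with the text, not the exact system behind Figure 10.

```python
import numpy as np
from scipy.special import kn

def mexican_hat(r, gamma=0.5):
    # illustrative difference-of-K0 kernel (not the paper's exact Mexican hat)
    r = np.maximum(r, 1e-8)
    return kn(0, r) - gamma * kn(0, 0.5 * r)

def simulate_adaptive(n=64, L=30.0, h=0.3, alpha=1.0, g=1.0, tau=5.0,
                      dt=0.05, steps=100):
    """Forward-Euler stepping of u_t = alpha*(-u - g*a + psi), tau*a_t = u - a."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    X, Y = np.meshgrid(x, x)
    dx = L / n
    w_hat = np.fft.fft2(np.fft.ifftshift(mexican_hat(np.hypot(X, Y)))) * dx * dx
    u = np.where(np.hypot(X, Y) < 3.0, 1.0, 0.0)   # spot-like initial data
    a = np.zeros_like(u)                            # adaptation switched off initially
    for _ in range(steps):
        f = (u > h).astype(float)
        psi = np.real(np.fft.ifft2(w_hat * np.fft.fft2(f)))
        # tuple assignment so the a-update uses the pre-step u
        u, a = (u + dt * alpha * (-u - g * a + psi),
                a + dt * (u - a) / tau)
    return u, a
```

Tracking the above-threshold radius of u over time in such a run (as in the middle row of Figure 10) is a simple way to detect breathing: an oscillating radius beyond the instability, a constant one below it. The parameter regimes for breathing versus drifting would need tuning against the conditions derived above.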
Fig. 10. Breathing instability from direct numerical simulation with parameters , , , coupling strength and Heaviside threshold . Initial data was constructed from a stationary spot solution by modifying the adaptation variable to a top-hat shape. Top and bottom rows: snapshots of activation u and adaptation a, respectively, at times indicated by orange lines in the middle row. Middle row: above threshold radii as function of time. See also the video in Additional file 6.
which we then inverse transform to obtain
At the point where , the shape of the spot deviates from circular with an amplitude that depends on quadratic and higher powers of c. Thus not only is there a breathing bifurcation at , but also a drifting instability to a traveling spot whose shape, determined from (79) by , can be written in the form with
For and dropping terms of in (79) we see that there are solutions to of the form , where . The amplitudes of higher order modes may be constructed in a similar fashion, i.e., by balancing terms at each order in c using (79) and (80). However, it is not our intention to pursue these lengthy calculations here. Rather, to give a feel for the shape of a traveling spot, we plot the level set where using (79) in Figure 11D, including terms up to . This nicely illustrates that spots contract in the direction of propagation and widen in the orthogonal direction, and provides a theoretical explanation for the shape of traveling spots recently reported in the literature. With the aid of direct numerical simulations we have also explored the scattering properties of traveling spots. In common with previous numerical studies of planar neural fields with some form of adaptation, we find that such structures can behave as quasi-particles in the sense that they can scatter like dissipative solitons. An example of such scattering is shown in Figure 11. Here we see a repulsive interaction which drives the spots away from each other if they approach too closely.
Fig. 11. Collision of two pulses from direct numerical simulation with parameters as in Figure 10. Columns A and C show u and a, respectively. Column B shows the interfaces, velocities (arrows to scale but enlarged by a factor of four), and dotted centerlines as visual guides. See also the video in Additional file 7. Column D predicts the shape of a traveling pulse for different c.
In this article we have formulated an interface dynamics for planar neural fields with a Heaviside firing rate. This has allowed us to (i) develop an economical computational framework for the evolution of spatiotemporal patterns, and (ii) perform linear stability analyses of localized structures. For simplicity we have focused on single population models; however, the extension to models that treat the dynamics of both excitatory and inhibitory populations is straightforward. Perhaps a more interesting extension is to consider neural field models that incorporate feature selectivity, such as that observed in visual cortex for orientation, spatial frequency and texture. Denoting this feature label by χ, all of these models are expressed in terms of some non-local integro-differential equation for . We note that the notion of an interface is still well defined, and that the level set condition gives a constraint between local geometrical data and features. As an alternative to simulating the neural field models, an interface approach (incorporating feature space) may be more useful for understanding how local data can be integrated into global geometrical structures, as advocated in the neurogeometry framework of Petitot (say, for understanding contour completion in models of primary visual cortex, where the feature space is orientation). The extension of this work to treat sigmoidal firing rates remains an open challenge. However, recent techniques for dealing with a certain class of firing rate functions in one spatial dimension, including smooth firing rate functions connecting zero to one, are likely to be useful in this regard . We have included an adaptive current in the standard Amari model here, but it would be informative to develop interface treatments for other forms of modulation, e.g., arising from threshold accommodation or synaptic depression, as well as the inclusion of axonal delays .
These models can readily support spiral wave activity, and it would be interesting to see whether an interface description, possibly adapting techniques of Hagberg and Meron , could shed light on their properties. Another possible extension of this work, motivated by our numerical results for scattering spots, is to develop an interface theory of quasi-particle interactions along the lines described for reaction-diffusion models in [38,39], using ideas developed by Bressloff  and Venkov  for weakly interacting systems in one spatial dimension. All of the above are topics of ongoing research and will be reported upon elsewhere.
Appendix: Numerical schemes
A.1 Fourier technique for neural field evolution
Because of its non-local character, the model described by (1), or its extension (61), is challenging to solve with conventional numerical methods. However, exploiting the convolution structure of (1) allows one to write the Fourier transform of as a product. Here and can be taken either as a Heaviside or a more general sigmoidal form. Introducing a spectral wave-vector k, this product is simply , where functions with argument k denote two-dimensional spatial Fourier transforms. We may evaluate directly, at every time step, using fast Fourier transforms (FFTs). Note that can be pre-computed, by FFT or here even analytically, so that the procedure iterated over time amounts to computing by FFT, followed by a (complex) multiplication with , and finally an inverse FFT to obtain the result of the integral. We wish to employ a parallel compute cluster for rapid computation over large grids, and hence use the free software package FFTW 3.3, which includes a parallel MPI-C version. Note that the use of Fourier methods implies that the discretization grid has periodic boundaries; in other words, the solution is effectively computed on a torus. We use a grid spacing of about 0.03 or better in our computations here.
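As a concrete illustration, the spectral evaluation of the convolution can be sketched in a few lines of serial NumPy. This is only a minimal sketch of the idea, not our production code (which is parallel MPI-C with FFTW); the function name and the Heaviside firing rate below are illustrative choices.

```python
import numpy as np

def synaptic_input(u, w_hat, theta=0.5):
    """Evaluate the convolution of the kernel w with the firing rate
    f(u) = H(u - theta) on a periodic grid, via FFTs.

    u     : 2D array of activity values on the grid
    w_hat : precomputed 2D FFT of the kernel w (same shape as u)
    """
    f = (u > theta).astype(float)  # Heaviside firing rate
    # multiplication in Fourier space == periodic convolution in real space
    return np.real(np.fft.ifft2(np.fft.fft2(f) * w_hat))
```

Because the discrete transform is periodic, this evaluates the integral on a torus, matching the boundary conditions discussed above, and w_hat need only be computed once.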
In order to compute the time evolution, we use DOPRI5 , a well-known implementation of an explicit Dormand-Prince (Runge-Kutta) method of order 5(4) with step size control and dense output of order 4. A C version due to J. Colinge is available on the web thanks to E. Hairer. However, in our case we perform parallel computations, and so have adapted this code accordingly using MPI-C. In particular, we now consider the maximum error across all compute nodes and all variables, rather than the mean error over local variables, and communicate the resulting time step adaptation over the cluster to achieve a unified evolution of the entire distributed grid. Numerical tolerances are set to , where represents all variables, i.e., u and potentially a at all grid points.
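A minimal serial analogue of this time stepping can be written with SciPy's "RK45" integrator, which implements the same adaptive Dormand-Prince 5(4) scheme as DOPRI5. The sketch below assumes the basic rate model du/dt = -u + w ⊗ H(u - θ) with the spectral convolution described above; the tolerances and names are illustrative, not those of our MPI code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def evolve(u0, w_hat, theta=0.5, t_span=(0.0, 10.0), tol=1e-7):
    """Integrate du/dt = -u + w * H(u - theta) on a periodic grid with
    an adaptive Dormand-Prince 5(4) scheme (SciPy's 'RK45')."""
    shape = u0.shape

    def rhs(t, y):
        u = y.reshape(shape)
        f = (u > theta).astype(float)  # Heaviside firing rate
        conv = np.real(np.fft.ifft2(np.fft.fft2(f) * w_hat))  # w * f(u)
        return (-u + conv).ravel()

    sol = solve_ivp(rhs, t_span, u0.ravel(), method="RK45",
                    rtol=tol, atol=tol)
    return sol.y[:, -1].reshape(shape)
```

Adding an adaptation variable a, as in (61), amounts to enlarging the state vector and the right-hand side accordingly.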
This numerical method is robust against effects of the underlying grid. This is due to the employed Fourier method, which performs the spatial convolution as a multiplication in Fourier space. The discrete Fourier transform used to transfer this calculation to Fourier space calculates a trigonometric interpolation polynomial, and the influence of the grid is effectively smoothed by implicit interpolation.
Computing an evolution as shown in Additional File 1 takes several hours on the 32 to 64 Infiniband-connected compute nodes we have typically employed, and yields many gigabytes of data. We note that computation with a sigmoidal firing rate instead of the Heaviside one is over an order of magnitude faster, reflecting the numerical difficulty of dealing with sharp edges.
A.2 Interface dynamics
Equations (22) and (28) can be used to develop a numerical scheme. The contour is discretized into a set of points, and the normal vectors and the displacement vectors are found by computing the orientation of and distance between points. The computation of the contour integrals in (28) is then straightforward and yields the normal velocity, cf. (22), which is used to displace the points of the contour in the normal direction at every time step. We employed a simple Euler method to calculate the dynamics of the contour. As the contour grows or shrinks, additional points have to be created or eliminated along it.
This method does not provide any means to deal with the splitting or emergence of contours. It is faster than the Fourier technique (see Section A.1 in the Appendix) for small contours, yet the time to compute the normal velocity is proportional to (N being the number of points discretizing the contour), as opposed to for the Fourier technique (where M is the number of grid points). Hence it becomes slower for larger contours, due to the absence of suitable spectral methods for computing the line integrals. The main advantage of this method is that no underlying grid has to be deployed across the specified domain.
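The purely geometric part of this scheme, estimating unit normals from neighboring contour points and displacing the contour with an Euler step, can be sketched as follows. This is a hedged illustration only: the normal velocity itself would come from discretizing the contour integrals in (28), and is simply taken as given here.

```python
import numpy as np

def contour_normals(pts):
    """Outward unit normals of a closed contour, given as an (N, 2) array
    of points ordered counter-clockwise, via central differences."""
    tang = np.roll(pts, -1, axis=0) - np.roll(pts, 1, axis=0)
    tang /= np.linalg.norm(tang, axis=1, keepdims=True)
    # rotate the unit tangent by -90 degrees: (tx, ty) -> (ty, -tx)
    return np.column_stack((tang[:, 1], -tang[:, 0]))

def euler_step(pts, normal_velocity, dt):
    """Displace each contour point along its normal (simple Euler step)."""
    return pts + dt * normal_velocity[:, None] * contour_normals(pts)
```

In practice one would recompute the normal velocity from the discretized line integrals after each step, and insert or delete points as the local spacing changes.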
Electronic Supplementary Material
Additional file 1. A movie of an evolving labyrinthine structure. Emergence of a labyrinthine structure in u as shown in snapshots in Figure 1. Panels A and B in the animation correspond to rows A and B of that figure, and displayed content, parameters and initial condition are discussed in its caption. The domain was discretized by a grid. Note that the Fourier technique used, see Section A.1 in the Appendix, implies periodicity and turns the domain effectively into a torus. (AVI 16.0 MB)
Additional file 2. A movie showing the similarity between Heaviside and steep sigmoidal models. In panel A the values of u from three direct numerical simulations are overlaid by assigning each one a part of the red-green-blue color space. The Heaviside model, cf. Figure 1 and the video in Additional file 1, determines the red intensity, whereas calculations with a sigmoidal firing rate , cf. Figure 3, determine the intensities in green for and blue for . Where all three models predict the same value of u, a gray color results, as shown by the colorbar. Where the models predict different values, colored patches show up. In panel B the same data are displayed, but this time only the interfaces are plotted with the same color assignment. From about time onward the structure starts to interfere with itself across the grid edges. (AVI 18.0 MB)
Additional file 3. A movie showing pattern evolution for a shallow sigmoidal model. The same model as in Figure 1 but with a sigmoidal firing rate with . A comparison with the Heaviside model, Figure 1 and the video in Additional file 1, as well as the sigmoid model with smaller σ, Figure 3 and the video in Additional file 1, shows that broadening the sigmoid eventually leads to significant deviations from the Heaviside prediction. This illustrates the practical limits of the interface method proposed in this article. (AVI 17.0 MB)
Additional file 4. A movie of a fivefold spot pattern arising from a ring instability. Decay of a perturbed ring solution in u into five spots as shown in snapshots in Figure 6. Displayed content, parameters and initial condition are discussed in that figure’s caption. The domain was discretized by a grid. Note that the simulation time (displayed on top of the u plot) is nonlinearly related to the actual play time of the animation, in order to capture both the rapid structural change from ring to spots at the beginning and the slow drifting apart of the spots that follows. (AVI 12.0 MB)
Additional file 5. A movie showing fingers of instability. Varicose and sinuous instabilities of a stripe as shown in snapshots in Figure 9. Panels A and B in the animation correspond to rows A and B of that figure, and displayed content, parameters and initial condition are discussed in its caption. For the present set of parameters varicose instabilities occur for , whereas sinuous instabilities occur for . The domain size is set to in the ordinate to guarantee that perturbations are periodic. The domain size in the abscissa is and the domain was discretized by a grid. (AVI 10.0 MB)
Additional file 6. A movie showing a breather. Breathing spot as shown in snapshots in Figure 10. Panels A and B in the animation correspond to the top and bottom row of that figure, and displayed content and parameters are discussed in its caption. Both activation u and adaptation a start with radius above threshold, the former with a spot solution and the latter with a disc of constant value 0.25g. The spot oscillates with angular frequency , close to the theoretical prediction from linear stability analysis (, with increasing agreement as one approaches the bifurcation point from above). The actual domain was , of which only part is shown, and was discretized by a grid. (AVI 7.7 MB)
Additional file 7. A movie showing spot scattering. Collision of two travelling spots as shown in snapshots in Figure 11. Panels A-C in the animation correspond to rows A-C of that figure, and displayed content and parameters are discussed in its caption. Note that time in the figure corresponds to in the animation. To initially create the traveling spots, two spot solutions in u with radius 2.8 are used, with two co-located discs in a of the same radius. The discs have a linear gradient along the abscissa from zero to 0.5g, while having uniform values along the ordinate. The mean position of the regions of the pulses is tracked during the time evolution and used to estimate the current velocities. The domain was discretized by a grid. (AVI 17.0 MB)
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
SC, HS and IB contributed equally. All authors read and approved the final manuscript.
Liley DTJ, Foster BL, Bojak I: Co-operative populations of neurons: mean field models of mesoscopic brain activity. In Computational Systems Neurobiology. Edited by Le Novère N. Dordrecht, Berlin; 2012:317-364.
SIAM J Appl Dyn Syst 2004, 3:378-407.
Phys Rev Lett 2005, 95: Article ID 208107.
New J Phys 2007, 9: Article ID 378.
Appl Math Lett 2004, 17:1047-1051.
Front Syst Neurosci 2010, 4: Article ID 20.
Phys Rev E 2002, 66: Article ID 066108.
Proc R Soc Edinb, Sect A, Math 1993, 123:461-478.
Biol Cybern 1999, 80:393-409.
SIAM J Appl Dyn Syst 2003, 2:487-516.
Annu Rev Fluid Mech 1992, 24:89-115.
J Comput Phys 1997, 135:220-226.
SIAM J Appl Math 2011, 71:379-408.
SIAM J Appl Math 2001, 62:206-225.
Coombes S, Owen MR: Exotic dynamics in a firing rate model of neural tissue with threshold accommodation. In Fluids and Waves: Recent Trends in Applied Analysis. Am. Math. Soc., Providence; 2007:123-144. [Contemp. Math. 440]
Philos Trans R Soc Lond B 2003, 358:1643-1667.
J Math Neurosci 2011, 1: Article ID 4.
Phys Rev Lett 2005, 94: Article ID 148102.
PLoS Comput Biol 2010, 6: Article ID e1000653.
Physica D 1998, 123:460-473.
Physica D 2001, 151:61-72.
SIAM J Appl Math 2005, 66:57-81.
Venkov NA: Dynamics of neural field models. PhD thesis. School of Mathematical Sciences, University of Nottingham; 2009. [http://www.umnaglava.org/pdfs.html]