
Phase-Amplitude Descriptions of Neural Oscillator Models

Abstract

Phase oscillators are a common starting point for the reduced description of many single neuron models that exhibit a strongly attracting limit cycle. The framework for analysing such models in response to weak perturbations is now particularly well advanced, and has allowed for the development of a theory of weakly connected neural networks. However, the strong-attraction assumption may well not be the natural one for many neural oscillator models. For example, the popular conductance based Morris–Lecar model is known to respond to periodic pulsatile stimulation in a chaotic fashion that cannot be adequately described with a phase reduction. In this paper, we generalise the phase description to one that allows one to track the evolution of distance from the cycle as well as phase on cycle. We use a classical technique from the theory of ordinary differential equations that makes use of a moving coordinate system to analyse periodic orbits. The subsequent phase-amplitude description is shown to be very well suited to understanding the response of the oscillator to external stimuli (which are not necessarily weak). We consider a number of examples of neural oscillator models, ranging from planar through to high dimensional models, to illustrate the effectiveness of this approach in providing an improvement over the standard phase-reduction technique. As an explicit application of this phase-amplitude framework, we consider in some detail the response of a generic planar model where the strong-attraction assumption does not hold, and examine the response of the system to periodic pulsatile forcing. In addition, we explore how the presence of dynamical shear can lead to a chaotic response.

1 Introduction

One only has to look at the plethora of papers and books on the topic of phase oscillators in mathematical neuroscience to see the enormous impact that this tool from dynamical systems theory has had on the way we think about describing neurons and neural networks. Much of this work has its roots in the theory of ordinary differential equations (ODEs) and has been promoted for many years in the work of Winfree [1], Guckenheimer [2], Holmes [3], Kopell [4], Ermentrout [5] and Izhikevich [6] to name but a few. For a recent survey, we refer the reader to the book by Ermentrout and Terman [7]. At heart, the classic phase reduction approach recognises that if a high dimensional non-linear dynamical system (as a model of a neuron) exhibits a stable limit cycle attractor then trajectories near the cycle can be projected onto the cycle.

A natural phase variable is simply the time along the cycle (from some arbitrary origin) relative to the period of oscillation. The notion of phase can even be extended off the cycle using the concept of isochrons [1]. They provide global information about the ‘latent phase’, namely the phase that will be asymptotically returned to for a trajectory with initial data within the basin of attraction of an exponentially stable periodic orbit. More technically, isochrons can be viewed as the leaves of the invariant foliation of the stable manifold of a periodic orbit [8]. In rotating frame coordinates given by phase and the leaf of the isochron foliation, the system has a skew-product structure, i.e. the equation of the phase decouples. However, it is a major challenge to find the isochron foliation, and since it relies on the knowledge of the limit cycle it can only be found in special cases or numerically. There are now a number of complementary techniques that tackle this latter challenge, and in particular we refer the reader to work of Guillamon and Huguet [9] (using Lie symmetries) and Osinga and Moehlis [10] (exploiting numerical continuation). More recent work by Mauroy and Mezic [11] is especially appealing as it uses a simple forward integration algorithm, as illustrated in Fig. 1 for a Stuart–Landau oscillator. However, it is more common to side-step the need for constructing global isochrons by restricting attention to a small neighbourhood of the limit cycle, where dynamics can simply be recast in the reduced form θ ˙ =1, where θ is the phase around a cycle. This reduction to a phase description gives a nice simple dynamical system, albeit one that cannot describe evolution of trajectories in phase-space that are far away from the limit cycle. However, the phase reduction formalism is useful in quantifying how a system (on or close to a cycle) responds to weak forcing, via the construction of the infinitesimal phase response curve (iPRC). For a given high dimensional conductance based model this can be solved for numerically, though for some normal form descriptions closed form solutions are also known [12]. The iPRC at a point on cycle is equal to the gradient of the (isochronal) phase at that point. This approach forms the basis for constructing models of weakly interacting oscillators, where the external forcing is pictured as a function of the phase of a firing neuron. This has led to a great deal of work on phase-locking and central pattern generation in neural circuitry (see, for example [13]). Note that the work in [9] goes beyond the notion of iPRC and introduces infinitesimal phase response surfaces (allowing evaluation of phase advancement even when the stimulus is off cycle), and see also the work in [14] on non-infinitesimal PRCs.

Fig. 1

Isochrons of a Stuart–Landau oscillator model: ẋ = λx/2 − (λc/2 + ω)y − λ(x² + y²)(x − cy)/2, ẏ = (λc/2 + ω)x + λy/2 − λ(x² + y²)(cx + y)/2. The black curve represents the periodic orbit of the system, which is simply the unit circle for this model. The blue curves are the isochrons obtained using the (forward) approach of Mauroy and Mezic [11]. The green dots are analytically obtained isochronal points [15]. Parameter values are λ=2.0, c=1.0, and ω=1.0
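For readers who wish to explore this example numerically, the following is a minimal sketch (assuming a standard Python/NumPy/SciPy stack; it is an illustration, not part of the original study) that integrates the Stuart–Landau vector field from the caption and checks that an arbitrary initial condition settles onto the unit-circle limit cycle.

```python
# Minimal sketch: integrate the Stuart-Landau vector field of Fig. 1 and verify
# that an arbitrary initial condition relaxes onto the unit-circle limit cycle.
# Parameter names follow the caption; the integrator settings are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

lam, c, omega = 2.0, 1.0, 1.0

def stuart_landau(t, z):
    x, y = z
    r2 = x ** 2 + y ** 2
    dx = lam * x / 2 - (lam * c / 2 + omega) * y - lam * r2 * (x - c * y) / 2
    dy = (lam * c / 2 + omega) * x + lam * y / 2 - lam * r2 * (c * x + y) / 2
    return [dx, dy]

sol = solve_ivp(stuart_landau, (0.0, 50.0), [0.2, 0.0], rtol=1e-9, atol=1e-9)
print("final radius:", np.hypot(sol.y[0, -1], sol.y[1, -1]))  # approaches 1
```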

The assumption that phase alone is enough to capture the essentials of neural response is one made more for mathematical convenience than being physiologically motivated. Indeed, for the popular type I Morris–Lecar (ML) firing model with standard parameters, direct numerical simulations with pulsatile forcing show responses that cannot be explained solely with a phase model [16]. The failure of a phase description is in itself no surprise and underlies why the community emphasises the use of the word weakly in the phrase “weakly connected neural networks”. Indeed, there are a number of potential pitfalls when applying phase reduction techniques to a system that is not in a weakly forced regime. The typical construction of the phase response curve uses only linear information about the isochrons and non-linear effects will come into play the further we move away from the limit cycle. This problem can be diminished by taking higher order approximations to the isochrons and using this information in the construction of a higher order PRC [17]. Even using perfect information about isochrons, the phase reduction still assumes persistence of the limit-cycle and instantaneous relaxation back to cycle. However, the presence of nearby invariant phase-space structures such as (unstable) fixed points and invariant manifolds may result in trajectories spending long periods of time away from the limit cycle. Moreover, strong forcing will necessarily take one away from the neighbourhood of a cycle where a phase description is expected to hold. Thus, developing a reduced description, which captures some notion of distance from cycle is a key component of any theory of forced limit cycle oscillators. The development of phase-amplitude models that better characterise the response of popular high dimensional single neuron models is precisely the topic of this paper. Given that it is a major challenge to construct an isochronal foliation we use non-isochronal phase-amplitude coordinates as a practical method for obtaining a more accurate description of neural systems. Recently, Medvedev [18] has used this approach to understand in more detail the synchronisation of linearly coupled stochastic limit cycle oscillators.

In Sect. 2, we consider a general coordinate transformation, which recasts the dynamics of a system in terms of phase-amplitude coordinates. This approach is directly taken from the classical theory for analysing periodic orbits of ODEs, originally considered for planar systems in [19], and for general systems in [20]. We advocate it here as one way to move beyond a purely phase-centric perspective. We illustrate the transformation by applying it to a range of popular neuron models. In Sect. 3, we consider how inputs to the neuron are transformed under these coordinate transformations and derive the evolution equations for the forced phase-amplitude system. This reduces to the standard phase description in the appropriate limit. Importantly, we show that the behaviour of the phase-amplitude system is much more able to capture that of the original single neuron model from which it is derived. Focusing on pulsatile forcing, we explore the conditions for neural oscillator models to exhibit shear induced chaos [16]. Finally in Sect. 4, we discuss the relevance of this work to developing a theory of network dynamics that can improve upon the standard weak coupling approach.

2 Phase-Amplitude Coordinates

Throughout this paper, we study the dynamics prescribed by the system ẋ = f(x), x ∈ ℝⁿ, with solutions x = x(t) that satisfy x(0) = x₀ ∈ ℝⁿ. We will assume that the system admits an attracting hyperbolic periodic orbit (namely one zero Floquet exponent and the others having negative real part), with period Δ, such that u(t) = u(t+Δ). A phase θ ∈ [0,Δ) is naturally defined from θ(u(t)) = t mod Δ. It has long been known in the dynamical systems community how to construct a coordinate system based on this notion of phase as well as a distance from cycle; see [20] for a discussion. In fact, Ermentrout and Kopell [21] made good use of this approach to derive the phase-interaction function for networks of weakly connected limit-cycle oscillators in the limit of infinitely fast attraction to cycle. However, this assumption is particularly extreme and unlikely to hold for a broad class of single neuron models. Thus, it is interesting to return to the full phase-amplitude description. In essence, the transformation to these coordinates involves setting up a moving orthonormal system around the limit cycle. One axis of this system is chosen to be in the direction of the tangent vector along the orbit, and the remaining are chosen to be orthogonal. We introduce the normalised tangent vector ξ as

$\xi(\theta) = \frac{du}{d\theta} \Big/ \left|\frac{du}{d\theta}\right|.$
(1)

The remaining coordinate axes are conveniently grouped together as the columns of an n×(n−1) matrix ζ. In this case, we can write an arbitrary point x as

$x(\theta,\rho) = u(\theta) + \zeta(\theta)\rho.$
(2)

Here, |ρ| represents the Euclidean distance from the limit cycle. A caricature, in ℝ², of the coordinate system along an orbit segment is shown in Fig. 2. Through the use of the variable ρ, we can consider points away from the periodic orbit. Rather than being isochronal, lines of constant θ are simply straight lines that emanate from a point on the orbit in the direction of the normal. The technical details of specifying the orthonormal coordinates forming ζ are discussed in Appendix A.
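As a concrete illustration (a sketch only, not the authors' code), the planar version of this construction can be set up from a numerically sampled periodic orbit: the unit tangent ξ is obtained by differentiating u with respect to θ, the single normal direction ζ is the tangent rotated by 90 degrees, and (2) then maps a (θ, ρ) pair back to the original coordinates.

```python
# Sketch of the planar moving frame: from samples u[k] of one period of the
# orbit on a uniform phase grid, build the unit tangent xi and the normal zeta,
# and map a (theta, rho) pair back to the original coordinates via Eq. (2).
import numpy as np

def moving_frame(u, Delta):
    """u: array of shape (m, 2), one period sampled uniformly; Delta: period."""
    dtheta = Delta / len(u)
    dudtheta = np.gradient(u, dtheta, axis=0)                # du/dtheta along the orbit
    xi = dudtheta / np.linalg.norm(dudtheta, axis=1, keepdims=True)
    zeta = np.column_stack((-xi[:, 1], xi[:, 0]))            # tangent rotated by 90 degrees
    return xi, zeta

def to_original(u, zeta, k, rho):
    """Point with phase index k and signed distance rho from the cycle."""
    return u[k] + zeta[k] * rho
```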

Fig. 2

Demonstration of the moving orthonormal coordinate system along an orbit segment. As t evolves from t 0 to t 1 , the coordinates vary smoothly. In this planar example, ζ always points to the outside of the orbit

Upon projecting the dynamics onto the moving orthonormal system, we obtain the dynamics of the transformed system:

$\dot{\theta} = 1 + f_1(\theta,\rho), \qquad \dot{\rho} = A(\theta)\rho + f_2(\theta,\rho),$
(3)

where

$f_1(\theta,\rho) = -h^T(\theta,\rho)\,\frac{d\zeta}{d\theta}\rho + h^T(\theta,\rho)\bigl[f(u+\zeta\rho) - f(u)\bigr],$
(4)
$f_2(\theta,\rho) = -\zeta^T\frac{d\zeta}{d\theta}\rho\, f_1 + \zeta^T\bigl[f(u+\zeta\rho) - f(u) - Df\,\zeta\rho\bigr],$
(5)
$h(\theta,\rho) = \left[\left|\frac{du}{d\theta}\right| + \xi^T\frac{d\zeta}{d\theta}\rho\right]^{-1}\xi, \qquad A(\theta) = \zeta^T\left[-\frac{d\zeta}{d\theta} + Df\,\zeta\right],$
(6)

and Df is the Jacobian of the vector field f evaluated along the periodic orbit u. The derivation of this system may be found in Appendix A. It is straightforward to show that f_1(θ,ρ) → 0 as |ρ| → 0, f_2(θ,0) = 0 and that ∂f_2(θ,0)/∂ρ = 0. In the above, f_1 captures the shear present in the system, that is, whether the speed of θ increases or decreases dependent on the distance from cycle. A precise definition for shear is given in [22]. Additionally, A(θ) describes the θ-dependent rate of attraction or repulsion from cycle.

It is pertinent to consider where this coordinate transformation breaks down, that is, where the determinant of the Jacobian of the transformation, K = det[∂x/∂θ  ∂x/∂ρ], vanishes. This never vanishes on cycle (where ρ=0), but may do so for some |ρ| = k > 0. This sets an upper bound on how far away from the limit cycle we can describe the system using these phase-amplitude coordinates. In Fig. 3, we plot the curve along which the transformation breaks down for the ML model. We observe that, for some values of θ, k is relatively small. The breakdown occurs where lines of constant θ cross, so that the transformation ceases to be invertible; these values of θ correspond to points along which the orbit has high curvature. We note that this issue is less problematic in higher dimensional models.
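A sketch of how the breakdown curve of Fig. 3 can be located numerically in the planar case: since x(θ,ρ) = u(θ) + ζ(θ)ρ, the columns of the Jacobian are ∂x/∂θ = du/dθ + (dζ/dθ)ρ and ∂x/∂ρ = ζ(θ), so K can be evaluated on a (θ,ρ) grid. The code below assumes the sampled orbit and frame of the previous sketch and is purely illustrative.

```python
# Sketch: evaluate K = det[dx/dtheta  dx/drho] for the planar transformation on
# a (theta, rho) grid; sign changes of K locate the breakdown contour of Fig. 3.
import numpy as np

def transform_determinant(u, zeta, Delta, rho_grid):
    m = len(u)
    dtheta = Delta / m
    dudtheta = np.gradient(u, dtheta, axis=0)
    dzetadtheta = np.gradient(zeta, dtheta, axis=0)
    K = np.empty((m, len(rho_grid)))
    for i in range(m):
        for j, rho in enumerate(rho_grid):
            col1 = dudtheta[i] + dzetadtheta[i] * rho        # dx/dtheta
            col2 = zeta[i]                                   # dx/drho
            K[i, j] = np.linalg.det(np.column_stack((col1, col2)))
    return K   # K = 0 marks where the phase-amplitude coordinates break down
```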

Fig. 3

This figure shows the determinant K of the phase-amplitude transformation for the ML model. Colours indicate the value of K. The red contour indicates where K=0, and thus where the coordinate transformation breaks down. Note how all of the values for which this occurs have ρ<0. Parameter values as in Appendix B.1

If we now consider the driven system,

$\dot{x} = f(x) + \varepsilon g(x,t), \qquad x \in \mathbb{R}^n,$
(7)

where ε is not necessarily small, we may apply the same transformation as above to obtain the dynamics in (θ,ρ) coordinates, where θ ∈ [0,Δ) and ρ ∈ ℝⁿ⁻¹, as

$\dot{\theta} = 1 + f_1(\theta,\rho) + \varepsilon h^T(\theta,\rho)\, g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr),$
(8)
$\dot{\rho} = A(\theta)\rho + f_2(\theta,\rho) + \varepsilon\, \zeta^T B(\theta,\rho)\, g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr),$
(9)

with

$B(\theta,\rho) = I_n - \frac{d\zeta}{d\theta}\rho\, h^T(\theta,\rho),$
(10)

and I n is the n×n identity matrix. Here, h and B describe the effect in terms of θ and ρ that the perturbations have. Details of the derivation are given in Appendix A. For planar models, B= I 2 . To demonstrate the application of the above coordinate transformation, we now consider some popular single neuron models.

2.1 A 2D Conductance Based Model

The ML model was originally developed to describe the voltage dynamics of barnacle giant muscle fibre [23], and is now a popular modelling choice in computational neuroscience [7]. It is written as a pair of coupled non-linear ODEs of the form

$C\dot{v} = I(t) - g_l(v - v_l) - g_K w (v - v_K) - g_{\mathrm{Ca}} m_\infty(v)(v - v_{\mathrm{Ca}}), \qquad \dot{w} = \phi\bigl(w_\infty(v) - w\bigr)/\tau_w(v).$
(11)

Here, v is the membrane voltage, whilst w is a gating variable, describing the fraction of membrane ion channels that are open at any time. The first equation expresses Kirchhoff’s current law across the cell membrane, with I(t) representing a stimulus in the form of an injected current. The detailed form of the model is completed in Appendix B.1. The ML model has a very rich bifurcation structure. Roughly speaking, by varying a constant current I(t) ≡ I_0, one observes, in different parameter regions, dynamical regimes corresponding to sinks, limit cycles, and Hopf, saddle-node and homoclinic bifurcations, as well as combinations of the above. These scenarios are discussed in detail in [7] and [24].
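A minimal sketch of the right-hand side of (11), using the auxiliary functions and parameter values of Appendix B.1 (the sign conventions for the reversal potentials follow the standard ML formulation; this is an illustration, not the authors' code):

```python
# Sketch of the Morris-Lecar right-hand side, Eq. (11), using the auxiliary
# functions and the (homoclinic-regime) parameter values of Appendix B.1.
import numpy as np

p = dict(C=20.0, gl=2.0, gK=8.0, gCa=4.0, phi=0.23, I=39.5,
         vl=-60.0, vK=-84.0, vCa=120.0, v1=-1.2, v2=18.0, v3=12.0, v4=17.4)

def ml_rhs(t, state, p=p):
    v, w = state
    minf = 0.5 * (1.0 + np.tanh((v - p['v1']) / p['v2']))
    winf = 0.5 * (1.0 + np.tanh((v - p['v3']) / p['v4']))
    tauw = 1.0 / np.cosh((v - p['v3']) / (2.0 * p['v4']))
    dv = (p['I'] - p['gl'] * (v - p['vl']) - p['gK'] * w * (v - p['vK'])
          - p['gCa'] * minf * (v - p['vCa'])) / p['C']
    dw = p['phi'] * (winf - w) / tauw
    return [dv, dw]
```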

As the ML model is planar, ρ is a scalar, as are the functions A and f_{1,2}. This allows us to use the moving coordinate system to clearly visualise parts of phase space where trajectories are attracted towards the limit cycle, and parts in which they move away from it, as illustrated in Fig. 4. The functions f_{1,2} and A, evaluated at ρ=0.1, are shown in Fig. 5. The evolution of θ is mostly constant; however, we clearly observe portions of the trajectories where this is slowed, along which ρ̇ ≈ 0. In fact, this corresponds to where trajectories pass near to the saddle node, and the dynamics stall. This occurs around θ=17, and in Fig. 5 we see that both A(θ) and f_1(θ,ρ) are indeed close to 0. The reduced velocities of trajectories here highlight the importance of considering other phase space structures in forced systems, the details of which are missed in standard phase only models. Forcing in the presence of such structures may give rise to complex and even chaotic behaviours, as we shall see in Sect. 3.

Fig. 4

Typical trajectory of the ML model of the transformed system. Left: Time evolution of θ and ρ. Right: Trajectory plotted in the (v,w) phase plane. We see that when ρ has a local maximum, the evolution of θ slows down, corresponding to where trajectories pass near to the saddle node. Parameter values as in Appendix B.1

Fig. 5

f 1 , f 2 , and A for the ML model, evaluated at ρ=0.1. We clearly see the difference in the order of magnitude between f 1 and f 2 for small ρ. Note that, although the average of A over one period is negative, it is positive for a non-trivial interval of θ. This corresponds to movement close to the stable manifold of the saddle node. Parameter values as in Appendix B.1

In the next example, we show how the same ideas carry over to higher dimensional models.

2.2 A 4D Conductance Based Model

The Connor–Stevens (CS) model [25] is built upon the Hodgkin–Huxley formalism and comprises a fast Na+ current, a delayed K+ current, a leak current and a transient K+ current, termed the A-current. The full CS model consists of 6 equations: the membrane potential, the original Hodgkin–Huxley gating variables, and an activating and inactivating gating variable for the A-current. Using the method of equivalent potentials [26], we may reduce the dimensionality of the system, to include only 4 variables. The reduced system is

$C\dot{v} = F(v,u,a,b) + I, \qquad \dot{u} = G(v,u), \qquad \dot{X} = \frac{X_\infty(v) - X}{\tau_X(v)}, \quad X \in \{a,b\},$
(12)

where

$F(v,u,a,b) = -\bigl[\, g_K n_\infty^4(u)(v - v_K) + g_{\mathrm{Na}} h_\infty(u) m_\infty^3(v)(v - v_{\mathrm{Na}}) + g_l(v - v_l) + g_a a^3 b\,(v - v_a) \bigr].$
(13)

The details of the reduced CS model are completed in Appendix B.2. The solutions to the reduced CS model under the coordinate transformation may be seen in Fig. 6, whilst, in Fig. 7, we show how this solution looks in the original coordinates. As for the ML model, θ evolves approximately constantly throughout, though this evolution is sped up close to θ=Δ. The trajectories of the vector ρ are more complicated, but note that there is regularity in the pattern exhibited, and that this occurs with approximately the same period as the underlying limit cycle. The damping of the amplitude of oscillations in ρ over successive periods represents the overall attraction to the limit cycle, whilst the regular behaviour of ρ represents the specific relaxation to cycle as shown in Fig. 7. Additional file 1 shows a movie of the trajectory in (v,u,b) coordinates with the moving orthonormal system superimposed, as well as the solution in ρ for comparison.

Fig. 6

Solution of the transformed CS system. Top: Time evolution of θ. Bottom: Time evolution of ρ coordinates. Upon transforming these coordinates back to the original ones, we arrive at Fig. 7. Parameter values given in Appendix B.2. In this parameter regime, the model exhibits type I firing dynamics

Fig. 7

Transformed trajectory in (v,u,b) space of the phase-amplitude description of the reduced CS model. The dotted black line is the phase amplitude solution transformed in the original coordinates, whilst the coloured orbit is the underlying periodic orbit, where the colour corresponds to the phase along the orbit. The solution of the phase-amplitude description of the model in (θ,ρ) coordinates is shown in Fig. 6

3 Pulsatile Forcing of Phase-Amplitude Oscillators

We now consider a system with time-dependent forcing, given by (7) with

$g(x,t) = \sum_{n \in \mathbb{Z}} \bigl(\delta(t - nT), 0, \ldots, 0\bigr)^T,$
(14)

where δ is the Dirac δ-function. This describes T-periodic kicks to the voltage variable. Even such a simple forcing paradigm can give rise to rich dynamics [16]. For the periodically kicked ML model, shear forces can lead to chaotic dynamics as folds and horseshoes accumulate under the forcing. This means that the response of the neuron is extremely sensitive to the initial phase when the kicks occur. In terms of neural response, this means that the neuron is unreliable [27].

The behaviour of oscillators under such periodic pulsatile forcing is the subject of a number of studies; see, e.g. [2730]. Of particular relevance here is [27], in which a qualitative reasoning of the mechanisms that bring about shear in such models is supplemented by direct numerical simulations to detect the presence of chaotic solutions. For the ML model in a parameter region close to the homoclinic regime, kicks can cause trajectories to pass near the saddle-node, and folds may occur as a result [16].

Here we would like to compare full planar neural models to the simple model studied in [27]:

$\dot{\theta} = 1 + \sigma\rho, \qquad \dot{\rho} = -\lambda\rho + \varepsilon P(\theta) \sum_{n \in \mathbb{Z}} \delta(t - nT).$
(15)

This system exhibits dynamical shear, which, under certain conditions, can lead to chaotic dynamics. The shear parameter σ dictates how much trajectories are ‘sped up’ or ‘slowed down’ depending on their distance from the limit cycle, whilst λ is the rate of attraction back to the limit cycle, which is independent of θ. Supposing that the function P is smooth but non-constant, trajectories will be taken a variable distance from the cycle upon the application of the kick. When kicks are repeated, this geometric mechanism can lead to repeated stretching and folding of phase space. It is clear that the larger σ is in (15), the more shear is present, and the more likely we are to observe the folding effect. In a similar way, smaller values of λ mean that the shear has longer to act upon trajectories and again result in a greater likelihood of chaos. Finally, to observe a chaotic response, we must ensure that the shear forces have sufficient time to act, meaning that T, the time between kicks, must not be too small.
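The essential behaviour of (15) can be explored with a few lines of code, since between kicks the linear flow can be solved exactly and each kick simply adds εP(θ) to ρ. The sketch below is illustrative only: the choice P(θ) = sin(2πθ), with θ taken mod 1, and the parameter values are ours, not taken from the paper.

```python
# Sketch: stroboscopic simulation of the kicked linear shear model (15).
# Between kicks the flow (theta-dot, rho-dot) = (1 + sigma*rho, -lambda*rho) is
# solved exactly; each kick adds eps*P(theta) to rho.  P is an illustrative choice.
import numpy as np

sigma, lam, eps, T = 3.0, 0.1, 0.1, 2.0
P = lambda theta: np.sin(2.0 * np.pi * theta)

def kick_and_flow(theta, rho):
    rho = rho + eps * P(theta)                                    # instantaneous kick
    theta = (theta + T + (sigma / lam) * rho * (1.0 - np.exp(-lam * T))) % 1.0
    rho = rho * np.exp(-lam * T)                                  # relaxation to cycle
    return theta, rho

theta, rho = 0.3, 0.0
for n in range(1000):
    theta, rho = kick_and_flow(theta, rho)
```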

This stretching and folding action can clearly lead to the formation of Smale horseshoes, which are well known to lead to a type of chaotic behaviour. However, horseshoes may co-exist with sinks, meaning the resulting chaotic dynamics would be transient. Wang and Young proved that under appropriate conditions, there is a set of T of positive Lebesgue measure for which the system experiences a stronger form of sustained, chaotic behaviour, characterised by the existence of a positive Lyapunov exponent for almost all initial conditions and the existence of a ‘strange attractor’; see, e.g. [2830].

By comparing with the phase-amplitude dynamics described by Eqs. (8)–(9), we see that the model of shear considered in (15) is a proxy for a more general system, with f_1(θ,ρ) → σρ, A(θ) → −λ, h(θ,ρ) → 0, and ζ(θ) → P(θ).

To gain a deeper insight into the phenomenon of shear induced chaos, it is pertinent to study the isochrons of the limit cycle for the linear model (15), where the isochrons are simply lines with slope −λ/σ. In Fig. 8, we depict the isochrons and the stretch-and-fold action of shear. The thin grey line at ρ=0 is kicked, at t = t_0, to the bold solid curve, where P(θ)=sin(θ), as studied in [16]. This curve is then allowed to evolve under the dynamics with no further kicks, through the dashed curve at t = t_1 and ultimately to the dotted curve at t = t_2; the dashed and dotted curves may be considered as evolutions of the solid curve by integer multiples of the natural period of the oscillator. Between t_0 and t_2, every point of the kicked curve traverses the isochron on which it initially lies. The green marker shows an example of one such point evolving along its associated isochron. The folding effect is clear in the figure, and is further illustrated in the video in Additional file 2.
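The slope of these isochrons follows from a short worked step (added here for completeness): integrating the unforced part of (15) from an initial point (θ, ρ) gives

$\rho(t) = \rho\, e^{-\lambda t}, \qquad \theta(t) = \theta + t + \frac{\sigma\rho}{\lambda}\bigl(1 - e^{-\lambda t}\bigr) \;\longrightarrow\; \theta + t + \frac{\sigma\rho}{\lambda} \quad (t \to \infty),$

so points of equal asymptotic phase satisfy θ + σρ/λ = constant, that is, they lie on straight lines ρ = −(λ/σ)(θ − constant) of slope −λ/σ.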

Fig. 8

Stretch-and-fold action of a kick followed by relaxation in the presence of shear. The thin black lines are the isochrons of the system, which in the case of the linear model (15) are simply straight lines with slope −λ/σ. The thin grey line at ρ=0 represents the limit cycle, which is kicked at t = t_0 by P(θ)=sin(θ) with strength ε=1 to the solid curve. After this, the orbits are allowed to evolve under the flow generated by the continuous part of the system. The dashed and dotted curves represent the image of the kicked solid curve under this flow, at times t_1 and t_2, respectively. The green marker shows how one point, x(t_0), evolves under the flow, first to x(t_1) and then to x(t_2), following the isochron as it relaxes back to the limit cycle. The effect of the shear forces and the subsequent folding, caricatured by the blue arrows, can clearly be seen

This simple model with a harmonic form for P(θ) provides insight into how strange attractors can be formed. Kicks along the isochrons or ones that map isochrons to one another will not produce strange attractors, but merely phase-shifts. What causes the stretching and folding is the variation in how far points are moved as measured in the direction transverse to the isochrons. For the linear system (15) variation in this sense is generated by any non-constant P(θ); the larger the ratio σε/λ, the larger the variation (see [16] for a recent discussion).

The formation of chaos in the ML model is shown in Fig. 9. Here, we plot the response to periodic pulsatile forcing, given by (14), in the (v,w) coordinate system. This clearly illustrates a folding of phase space around the limit-cycle, and is further portrayed in the video in Additional file 3. We now show how this can be understood using phase-amplitude coordinates.

Fig. 9

Shear induced folding in the ML model, with parameters as in Fig. 5. The red curve in all panels represents the limit cycle of the unperturbed system, whilst the green dotted line represents the stable manifold of the saddle node, indicated by the orange marker. We begin by distributing points along the limit cycle and then apply an instantaneous kick taking v → v − ε, where ε=2.0, leaving w unchanged. This essentially moves all phase points to the left, to the blue curve. The successive panels show the image of this set of points after letting the points evolve freely under the system defined by the ML equations and then applying the kick again. The curves shown are the images of the initial phase points just after each kick, as indicated in the figure. We can clearly observe the shear induced folding. Parameter values as in Appendix B.1

Compared to the phenomenological system (15), models written in phase-amplitude coordinates as (8)–(9) have two main differences. The intrinsic dynamics (without kicks) are non-linear, and the kick terms appear in both the equation for θ̇ and the equation for ρ̇ (not just ρ̇). Simulations of (8)–(9) for both the FHN and ML models, with ε=0.1, show that replacing f_1(θ,ρ) by σρ, dropping f_2(θ,ρ) (which is quadratic in ρ), and setting A(θ) = −λ does not lead to any significant qualitative change in behaviour (for a wide range of σ, λ > 0). We therefore conclude that, at least when the kick amplitude ε is not too large, it is more important to focus on the form of the forcing in the phase-amplitude coordinates. In what follows, we are interested in discovering the effects of different functional forms of the forcing term multiplying the δ-function, keeping other factors fixed. As examples, we choose those forcing terms given by transforming the FHN and the ML models into phase-amplitude coordinates. To find these functions, we first find the attracting limit cycle solution of the ML model (11) and the FHN model (52) using a periodic boundary value problem solver and set up the orthonormal coordinate system around this limit cycle. Once the coordinate system is established, we evaluate the functions h(θ,ρ) and B(θ,ρ) (that appear in Eqs. (8) and (9)). For planar systems, we have simply that B(θ,ρ) = I_2. Using the forcing term (14), we are only considering perturbations to the voltage component of our system, and thus only the first component of h and the first column of B will make a non-trivial contribution to the dynamics. We define P_1 as the first component of h and P_2 as the first component of ζ. We wish to force each system at the same ratio of the natural frequency of the underlying periodic orbit. To ease comparison between the system with the ML forcing terms and the FHN forcing terms, we rescale θ → θ/Δ so that θ ∈ [0,1) in what follows. Implementing the above choices leads to

$\dot{\theta} = 1 + \sigma\rho + \varepsilon P_1(\theta,\rho) \sum_{n \in \mathbb{Z}} \delta(t - nT), \qquad \dot{\rho} = -\lambda\rho + \varepsilon P_2(\theta) \sum_{n \in \mathbb{Z}} \delta(t - nT).$
(16)
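In practice, P_1 and P_2 can be read off directly from the moving frame once the limit cycle has been computed. The following is a minimal sketch for the planar case, evaluated on cycle (ρ = 0, as in Fig. 10); the sampled orbit and frame are assumed to come from a boundary value solver and frame construction as sketched earlier, so this is an illustration rather than the authors' implementation.

```python
# Sketch (planar case, B = I_2): the effect of a unit kick to the voltage
# variable, g = (1, 0)^T, in the (theta, rho) frame.  On cycle (rho = 0),
# h reduces to xi/|du/dtheta|, so P_1 = xi_1/|du/dtheta| and P_2 = zeta_1.
import numpy as np

def kick_projection(u, xi, zeta, Delta):
    dtheta = Delta / len(u)
    speed = np.linalg.norm(np.gradient(u, dtheta, axis=0), axis=1)   # |du/dtheta|
    P1_on_cycle = xi[:, 0] / speed        # phase sensitivity to a voltage kick
    P2_on_cycle = zeta[:, 0]              # amplitude sensitivity to a voltage kick
    return P1_on_cycle, P2_on_cycle
```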

It is important to emphasise that P 1 , 2 are determined by the underlying single neuron model (unlike in the toy model (15)). As emphasised in [31], one must take care in the treatment of the state dependent ‘jumps’ caused by the δ-functions in (16) to accommodate the discontinuous nature of θ and ρ at the time of the kick. To solve (16), we approximate δ(t) with a normalised square pulse δ τ (t) of the form

$\delta_\tau(t) = \begin{cases} 0, & t \le 0, \\ 1/\tau, & 0 < t \le \tau, \\ 0, & t > \tau, \end{cases}$
(17)

where τ ≪ 1. This means that for (n−1)T + τ < t ≤ nT, n ∈ ℤ, the dynamics are governed by the linear system (θ̇, ρ̇) = (1 + σρ, −λρ). This can be integrated to obtain the state of the system just before the arrival of the next kick, (θ_n, ρ_n) ≡ (θ(nT), ρ(nT)), in the form

$\theta_n = \left[\theta(t) + nT - t + \frac{\sigma}{\lambda}\rho(t)\bigl(1 - e^{-\lambda(nT - t)}\bigr)\right] \bmod 1,$
(18)
$\rho_n = \rho(t)\, e^{-\lambda(nT - t)}.$
(19)

In the interval nT<t<nT+τ and using (17) we now need to solve the system of non-linear ODEs

$\dot{\theta} = 1 + \sigma\rho + \frac{\varepsilon}{\tau} P_1(\theta,\rho), \qquad \dot{\rho} = -\lambda\rho + \frac{\varepsilon}{\tau} P_2(\theta).$
(20)

Rescaling time as t = nT + τs, with 0 ≤ s ≤ 1, and writing the solution (θ,ρ) as a regular perturbation expansion in powers of τ as (θ(s), ρ(s)) = (θ_0(s) + τθ_1(s), ρ_0(s) + τρ_1(s)) + ⋯, we find after collecting terms of leading order in τ that the pair (θ_0(s), ρ_0(s)) is governed by

$\frac{d\theta_0}{ds} = \varepsilon P_1\bigl(\theta_0(s), \rho_0(s)\bigr), \qquad \frac{d\rho_0}{ds} = \varepsilon P_2\bigl(\theta_0(s)\bigr), \qquad 0 \le s \le 1,$
(21)

with initial conditions (θ_0(0), ρ_0(0)) = (θ_n, ρ_n). The solution (θ_0(1), ρ_0(1)) ≡ (θ_n⁺, ρ_n⁺) (obtained numerically) can then be taken as initial data (θ(t), ρ(t)) = (θ_n⁺, ρ_n⁺) in (18)–(19) to form the stroboscopic map

$\theta_{n+1} = \left[\theta_n^+ + T + \frac{\sigma}{\lambda}\rho_n^+\bigl(1 - e^{-\lambda T}\bigr)\right] \bmod 1,$
(22)
$\rho_{n+1} = \rho_n^+\, e^{-\lambda T}.$
(23)
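A sketch of the through-kick step: the leading-order system (21) is integrated over the rescaled time s ∈ [0,1] to map (θ_n, ρ_n) to (θ_n⁺, ρ_n⁺), which then seeds the between-kick map (22)–(23). Here P_1 and P_2 are assumed to be supplied as callables (for instance, interpolants of the transformed model's h and ζ); the implementation is illustrative only.

```python
# Sketch: propagate the state through one kick by solving the leading-order
# system (21) on 0 <= s <= 1, returning (theta_n^+, rho_n^+) for use in (22)-(23).
from scipy.integrate import solve_ivp

def through_kick(theta_n, rho_n, P1, P2, eps):
    rhs = lambda s, z: [eps * P1(z[0], z[1]), eps * P2(z[0])]
    sol = solve_ivp(rhs, (0.0, 1.0), [theta_n, rho_n], rtol=1e-8, atol=1e-10)
    return sol.y[0, -1], sol.y[1, -1]
```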

Note that this has been constructed using a matched asymptotic expansion, using (17), and is valid in the limit τ → 0. For weak forcing, where ε ≪ 1, P_{1,2} vary slowly through the kick and can be approximated by their values at (θ_n, ρ_n), so that to O(ε²)

$\theta_{n+1} = \left[\theta_n + T + \varepsilon P_1(\theta_n,\rho_n) + \frac{\sigma}{\lambda}\bigl(\rho_n + \varepsilon P_2(\theta_n)\bigr)\bigl(1 - e^{-\lambda T}\bigr)\right] \bmod 1,$
(24)
$\rho_{n+1} = \bigl(\rho_n + \varepsilon P_2(\theta_n)\bigr)\, e^{-\lambda T}.$
(25)

Although this explicit map is convenient for numerical simulations, we prefer to work with the full stroboscopic map (22)–(23), which is particularly useful for comparing and contrasting the behaviour of different planar single neuron models with arbitrary kick strength. As an indication of the presence of chaos in the dynamics resulting from this system, we evaluate the largest Lyapunov exponent of the map (22)–(23) by numerically evolving a tangent vector and computing its rate of growth (or contraction); see e.g. [32] for details.
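The following is a sketch of the tangent-vector approach under stated assumptions rather than the computation used for the figures: it iterates the weak-forcing map (24)–(25), uses a finite-difference Jacobian-vector product, and takes the placeholder choices P_1 = 0 and P_2(θ) = sin(2πθ) instead of the model-derived functions.

```python
# Sketch: largest Lyapunov exponent of the stroboscopic map by evolving a
# tangent vector.  The map is the weak-forcing approximation (24)-(25) with
# illustrative P_1, P_2; the Jacobian-vector product is taken by finite differences.
import numpy as np

sigma, lam, eps, T = 3.0, 0.1, 0.1, 2.0
P1 = lambda theta, rho: 0.0
P2 = lambda theta: np.sin(2.0 * np.pi * theta)

def smap(z):
    theta, rho = z
    rho_plus = rho + eps * P2(theta)
    theta_new = (theta + T + eps * P1(theta, rho)
                 + (sigma / lam) * rho_plus * (1.0 - np.exp(-lam * T))) % 1.0
    return np.array([theta_new, rho_plus * np.exp(-lam * T)])

def largest_lyapunov(z0, n_iter=20000, h=1e-7):
    z, v = np.array(z0, dtype=float), np.array([1.0, 0.0])
    total = 0.0
    for _ in range(n_iter):
        d = smap(z + h * v) - smap(z)
        d[0] = (d[0] + 0.5) % 1.0 - 0.5        # unwrap the phase difference
        Jv = d / h                             # approximate Jacobian-vector product
        z = smap(z)
        growth = np.linalg.norm(Jv)
        total += np.log(growth)
        v = Jv / growth
    return total / n_iter                      # per-kick exponent

print(largest_lyapunov([0.3, 0.0]))
```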

In Fig. 10, we compare the functions P 1 , 2 for both the FHN and the ML models. We note that P 2 for the FHN model is near 0 for a large set of θ, whilst the same is true for P 1 for the ML model. This means that kicks in the FHN model will tend to primarily cause phase shifts, whilst the same kicks in the ML model will primarily cause shifts in amplitude.

Fig. 10

The blue curves show the change in θ under the action of a pulsatile kick in v, whilst the red dashed curves show the change in ρ under the same kick. The top plot is for the FHN model, whilst the bottom plot is for the ML model. We evaluate the effect of the kicks at ρ n =0, where we observe the largest changes in θ under the action of kicks

We plot in the top row of Fig. 11 the pair (θ_n, θ_{n+1}), for (24)–(25), for the FHN and ML models. For the FHN model, we find that the system has a Lyapunov exponent of −0.0515 < 0. For the ML model, the Lyapunov exponent is 0.6738 > 0. This implies that differences in the functional forms of P_{1,2} can help to explain the generation of chaos.

Fig. 11

Panel (a) shows successive iterates of θ in system (22)–(23) with functions P 1 , 2 taken from the FHN model, whilst panel (b) presents the same iterates but for functions P 1 , 2 from the ML model. Panel (c) shows P 1 , 2 for the FHN model, where the bold blue line is P 1 and the red dashed line is P 2 . Superimposed on this panel is a histogram displaying how kicks are distributed in terms of θ alone. Panel (d) shows the same information, except this time for forcing functions from the ML model. Parameter values are σ=3.0, λ=0.1, ε=0.1, and T=2.0

Now that we know the relative contribution of kicks in v to kicks in (θ,ρ), it is also useful to know where kicks actually occur in terms of θ, as this will determine the contribution of a train of kicks to the (θ,ρ) dynamics. In Figs. 11c and d, we plot the distribution of kicks as a function of θ. For the ML model, we observe that the kicks are distributed over all phases, while for the FHN model there is a grouping of kicks around the region where P_2 is roughly zero. This means that kicks will not be felt as much in the ρ variable, and so trajectories here do not get kicked far from cycle. This helps explain why it is more difficult to generate chaotic responses in the FHN model.

After transients, we observe a 1:1 phase-locked state for the FHN model. For a phase-locked state, small perturbations will ultimately decay as the perturbed trajectories also end up at the phase-locked state after some transient behaviour. This results in a negative largest Lyapunov exponent of −0.0515. We note the sharply peaked distribution of kick phases, which is to be expected for discrete-time systems possessing a negative largest Lyapunov exponent, since such systems tend to have sinks in this case. The phase-locked state here occurs where P 2 is small, suggesting that trajectories stay close to the limit cycle. Since kicks do not move trajectories away from cycle, there is no possibility of folding, and hence no chaotic behaviour. For the ML model, we observe chaotic dynamics around a strange attractor, where small perturbations can grow, leading to a positive largest Lyapunov exponent of 0.6738. This time, the kicks are distributed fairly uniformly across θ, and so, some kicks will take trajectories away from the limit cycle, thus leading to shear-induced folding and chaotic behaviour.

4 Discussion

In this paper, we have used the notion of a moving orthonormal coordinate system around a limit cycle to study dynamics in a neighbourhood around it. This phase-amplitude coordinate system can be constructed for any given ODE system supporting a limit cycle. A clear advantage of the transformed description over the original one is that it allows us to gain insight into the effect of time dependent perturbations, using the notion of shear, as we have illustrated by performing case studies of popular neural models, in two and higher dimensions. Whilst this coordinate transformation does not result in any reduction in dimensionality of the system, as is the case with classical phase reduction techniques, it opens up avenues for moving away from the weak coupling limit, where ε → 0. Importantly, it emphasises the role of the two functions P_1(θ,ρ) and P_2(θ) that provide more information about inputs to the system than the iPRC alone. It has been demonstrated that moderately small perturbations can exert remarkable influence on dynamics in the presence of other invariant structures [16], which cannot be captured by a phase only description. In addition, small perturbations can accumulate if the timescale of the perturbation is shorter than the timescale of attraction back to the limit cycle. This should be given particular consideration in the analysis of neural systems, where oscillators may be connected to thousands of other units, so that small inputs can quickly accumulate.

One natural extension of this work is to move beyond the theory of weakly coupled oscillators to develop a framework for describing neural systems as networks of phase-amplitude units. This has previously been considered for the case of weakly coupled weakly dissipative networks of non-linear planar oscillators (modelled by small dissipative perturbations of a Hamiltonian oscillator) [3335]. It would be interesting to develop these ideas and obtain network descriptions of the following type:

$\dot{\theta}_i = 1 + f_1(\theta_i, \rho_i) + \sum_j w_{ij}\, H_1(\theta_i, \theta_j, \rho_i, \rho_j),$
(26)
$\dot{\rho}_i = A(\theta_i)\rho_i + \sum_j w_{ij}\, H_2(\theta_i, \theta_j, \rho_i, \rho_j),$
(27)

with an appropriate identification of the interaction functions H 1 , 2 in terms of the biological interaction between neurons and the single neuron functions P 1 , 2 . Such phase-amplitude network models are ideally suited to describing the behaviour of the mean-field signal in networks of strongly gap junction coupled ML neurons [36, 37], which is known to vary because individual neurons make transitions between cycles of different amplitudes. Moreover, in the same network weakly coupled oscillator theory fails to explain how the synchronous state can stabilise with increasing coupling strength (predicting that it is always unstable), as observed numerically. All of the above are topics of ongoing research and will be reported upon elsewhere.

Appendix A: Derivation of the Transformed Dynamical System

Starting from

$\dot{x} = f(x) + \varepsilon g(x,t),$
(28)

we make the transformation x(t)=u(θ(t))+ζ(θ(t))ρ(t), giving

$\left[\frac{du(\theta)}{d\theta} + \frac{d\zeta(\theta)}{d\theta}\rho\right]\dot{\theta} + \zeta(\theta)\dot{\rho} = f\bigl(u(\theta) + \zeta(\theta)\rho\bigr) + \varepsilon g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr).$
(29)

We proceed by projecting (29) onto ξ(θ), using (1). The left-hand side of (29) now reads:

$\left[\left|\frac{du}{d\theta}\right| + \xi^T\frac{d\zeta}{d\theta}\rho\right]\frac{d\theta}{dt},$
(30)

where ξ T denotes the transpose of ξ and the right-hand side of (29) becomes

$\xi^T f(u + \zeta\rho) + \varepsilon\xi^T g(u + \zeta\rho, t) = \left[\left|\frac{du}{d\theta}\right| + \xi^T\frac{d\zeta}{d\theta}\rho\right] + \xi^T f(u + \zeta\rho) - \xi^T f(u) - \xi^T\frac{d\zeta}{d\theta}\rho + \varepsilon\xi^T g(u + \zeta\rho, t).$
(31)

Thus,

$\dot{\theta} = 1 + f_1(\theta,\rho) + \varepsilon h^T(\theta,\rho)\, g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr),$
(32)

where

$h(\theta,\rho) = \left[\left|\frac{du}{d\theta}\right| + \xi^T\frac{d\zeta}{d\theta}\rho\right]^{-1}\xi(\theta),$
(33)

and

$f_1(\theta,\rho) = -h^T(\theta,\rho)\,\frac{d\zeta}{d\theta}\rho + h^T(\theta,\rho)\bigl[f(u + \zeta\rho) - f(u)\bigr].$
(34)

Upon projecting both sides of (29) onto ζ(θ), the left-hand side reads

$\zeta^T\left[\frac{du}{d\theta} + \frac{d\zeta}{d\theta}\rho\right]\frac{d\theta}{dt} + \frac{d\rho}{dt} = \zeta^T\frac{d\zeta}{d\theta}\rho\,\frac{d\theta}{dt} + \frac{d\rho}{dt} = \zeta^T\frac{d\zeta}{d\theta}\rho\,\bigl[1 + f_1(\theta,\rho) + \varepsilon h^T(\theta,\rho)\, g(u + \zeta\rho, t)\bigr] + \frac{d\rho}{dt},$
(35)

whilst the right-hand side becomes

$\zeta^T f(u + \zeta\rho) + \varepsilon\zeta^T g(u + \zeta\rho, t) = \zeta^T f(u) + \zeta^T Df\,\zeta\rho + \zeta^T\bigl[f(u + \zeta\rho) - f(u) - Df\,\zeta\rho\bigr] + \varepsilon\zeta^T g(u + \zeta\rho, t),$
(36)

since ζ T f(u)= ζ T du/dθ=0 and where Df denotes the Jacobian of f. Putting together the previous two equations yields

$\dot{\rho} = A(\theta)\rho + f_2(\theta,\rho) + \varepsilon\zeta^T\left[I - \frac{d\zeta(\theta)}{d\theta}\rho\, h^T(\theta,\rho)\right] g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr),$
(37)

where

$A(\theta) = \zeta^T\left[-\frac{d\zeta}{d\theta} + Df\,\zeta\right],$
(38)
$f_2(\theta,\rho) = -\zeta^T\frac{d\zeta}{d\theta}\rho\, f_1 + \zeta^T\bigl[f(u + \zeta\rho) - f(u) - Df\,\zeta\rho\bigr].$
(39)

It may be easily seen that f_1(θ,ρ) = O(ρ) as ρ → 0 and that f_2(θ,0) = 0 and ∂f_2(θ,0)/∂ρ = 0. Overall, combining (32) and (37) we arrive at the transformed system:

$\dot{\theta} = 1 + f_1(\theta,\rho) + \varepsilon h^T(\theta,\rho)\, g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr), \qquad \dot{\rho} = A(\theta)\rho + f_2(\theta,\rho) + \varepsilon\zeta^T\left[I_n - \frac{d\zeta(\theta)}{d\theta}\rho\, h^T(\theta,\rho)\right] g\bigl(u(\theta) + \zeta(\theta)\rho, t\bigr).$
(40)

In order to evaluate the functions f_1, f_2, and A for models with dimension larger than two, we need to calculate dζ/dθ. Denoting by γ_i(θ) the direction angles of ξ(θ), we have that

$\zeta_i = e_i - \frac{\cos\gamma_i}{1 + \cos\gamma_1}\bigl(e_1 + \xi(\theta)\bigr) = e_i - \frac{e_i \cdot \xi(\theta)}{1 + e_1 \cdot \xi(\theta)}\bigl(e_1 + \xi(\theta)\bigr), \qquad i = 2, 3, \ldots, n,$
(41)

where the index i denotes the column entry of ζ and x · y denotes the dot product between vectors x and y. Defining

$u_i(\theta) = \frac{e_i \cdot \xi(\theta)}{1 + e_1 \cdot \xi(\theta)},$
(42)

and

$w_j(\theta) = e_{1,j} + \xi_j(\theta),$
(43)

where j denotes the row index, we have

$\frac{d\zeta_{i,j}}{d\theta} = -u_i\frac{dw_j}{d\theta} - w_j\frac{du_i}{d\theta}.$
(44)

By the quotient rule for vectors we find that

$\frac{du_i}{d\theta} = \frac{\bigl(1 + e_1 \cdot \xi(\theta)\bigr)\bigl(e_i \cdot (d\xi/d\theta)\bigr) - \bigl(e_i \cdot \xi(\theta)\bigr)\bigl(e_1 \cdot (d\xi/d\theta)\bigr)}{\bigl(1 + e_1 \cdot \xi(\theta)\bigr)^2},$
(45)

and that

$\frac{dw_j}{d\theta} = \frac{d\xi_j}{d\theta}.$
(46)

Overall, we have that

$\frac{d\zeta_{i,j}}{d\theta} = -\frac{e_i \cdot \xi(\theta)}{1 + e_1 \cdot \xi(\theta)}\,\frac{d\xi_j}{d\theta} - \bigl(e_{1,j} + \xi_j(\theta)\bigr)\left(\frac{\bigl(1 + e_1 \cdot \xi(\theta)\bigr)\bigl(e_i \cdot (d\xi/d\theta)\bigr) - \bigl(e_i \cdot \xi(\theta)\bigr)\bigl(e_1 \cdot (d\xi/d\theta)\bigr)}{\bigl(1 + e_1 \cdot \xi(\theta)\bigr)^2}\right).$
(47)
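A sketch of this construction (a check rather than production code): the columns of ζ can be built directly from the unit tangent via (41) and verified to be orthonormal and orthogonal to ξ; dζ/dθ can then be obtained either from (47) or by finite differences in θ.

```python
# Sketch: build the n x (n-1) matrix zeta from the unit tangent xi via Eq. (41)
# and check orthonormality.  Valid provided 1 + e_1 . xi(theta) does not vanish.
import numpy as np

def frame_from_tangent(xi):
    n = len(xi)
    e = np.eye(n)
    cols = [e[i] - (e[i] @ xi) / (1.0 + e[0] @ xi) * (e[0] + xi)
            for i in range(1, n)]                   # i = 2, ..., n in the paper's indexing
    return np.column_stack(cols)

xi = np.array([0.6, 0.8, 0.0])                      # example unit tangent in R^3
zeta = frame_from_tangent(xi)
print(np.allclose(zeta.T @ xi, 0.0), np.allclose(zeta.T @ zeta, np.eye(2)))  # True True
```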

Appendix B: Gallery of Models

B.1 Morris–Lecar

The ML equations describe the interaction of membrane voltage with just two ionic currents: Ca²⁺ and K⁺. Membrane ion channels are selective for specific types of ions; their dynamics are modelled here by the gating variable w and the auxiliary functions w_∞, τ_w, and m_∞. The latter have the form

$m_\infty(v) = \tfrac{1}{2}\bigl[1 + \tanh\bigl((v - v_1)/v_2\bigr)\bigr], \qquad \tau_w(v) = 1/\cosh\bigl((v - v_3)/(2 v_4)\bigr), \qquad w_\infty(v) = \tfrac{1}{2}\bigl[1 + \tanh\bigl((v - v_3)/v_4\bigr)\bigr].$
(48)

The function m_∞(v) models the action of fast voltage-gated calcium ion channels; v_Ca is the reversal (bias) potential for the calcium current and g_Ca the corresponding conductance. The functions τ_w(v) and w_∞(v) similarly describe the dynamics of the slower-acting potassium channels, with their own reversal potential v_K and conductance g_K. The constants v_l and g_l characterise the leakage current that is present even when the neuron is in a quiescent state. Parameter values are C=20.0 μF/cm², g_l=2.0 mmho/cm², g_K=8.0 mmho/cm², g_Ca=4.0 mmho/cm², ϕ=0.23, I=39.5 μA/cm², v_l=−60.0 mV, v_K=−84.0 mV, v_Ca=120.0 mV, v_1=−1.2 mV, v_2=18.0 mV, v_3=12.0 mV, and v_4=17.4 mV.

B.2 Reduced Connor–Stevens Model

For the reduced CS model, we start with the full Hodgkin–Huxley model, with m, n, h as gating variables, and use the method of equivalent potentials as treated in [26], giving rise to the following form for the function G:

$G(v,u) = \left(\frac{\partial F}{\partial h}\left[\frac{h_\infty(v) - h_\infty(u)}{\tau_h(v)}\right] + \frac{\partial F}{\partial n}\left[\frac{n_\infty(v) - n_\infty(u)}{\tau_n(v)}\right]\right) \bigg/ \left(\frac{\partial F}{\partial h}\frac{dh_\infty(u)}{du} + \frac{\partial F}{\partial n}\frac{dn_\infty(u)}{du}\right),$
(49)

where ∂F/∂h and ∂F/∂n are evaluated at h = h_∞(u) and n = n_∞(u). For the gating variables (a,b), we have

$a_\infty(v) = \left(\frac{0.0761\, e^{(v + 94.22)/31.84}}{1 + e^{(v + 1.17)/28.93}}\right)^{1/3}, \qquad \tau_a(v) = 0.3632 + \frac{1.158}{1 + e^{(v + 55.96)/20.12}},$
(50)
$b_\infty(v) = \left(\frac{1}{1 + e^{(v + 53.3)/14.54}}\right)^{4}, \qquad \tau_b(v) = 1.24 + \frac{2.678}{1 + e^{(v + 50)/16.027}}.$
(51)

Parameter values are C=1 μF/cm², g_l=0.3 mmho/cm², g_K=36.0 mmho/cm², g_a=47.7 mmho/cm², I=35.0 μA/cm², v_0=80.0 mV, v_a=−75.0 mV, v_K=−77.0 mV, v_l=−54.4 mV, and v_Na=50.0 mV.

B.3 FitzHugh–Nagumo Model

The FHN model is a phenomenological model of spike generation, comprising two variables. The first represents the membrane potential and includes a cubic non-linearity, whilst the second variable is a gating variable, similar to w in the ML model, which may be thought of as a recovery variable. The system is

$\mu\dot{v} = v(a - v)(v - 1) + I - w, \qquad \dot{w} = v - bw,$
(52)

where we use the following parameter values: μ=0.05, a=0.9, I=1.1, and b=0.5.
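For completeness, a sketch of the corresponding right-hand side (signs follow the reconstruction of (52) above; the code is illustrative only):

```python
# Sketch of the FitzHugh-Nagumo right-hand side, Eq. (52), with the parameter
# values listed above.
def fhn_rhs(t, state, mu=0.05, a=0.9, I=1.1, b=0.5):
    v, w = state
    dv = (v * (a - v) * (v - 1.0) + I - w) / mu
    dw = v - b * w
    return [dv, dw]
```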


Abbreviations

ML: Morris–Lecar
FHN: FitzHugh–Nagumo
CS: Connor–Stevens
LE: Lyapunov exponent

References

1. Winfree A: The Geometry of Biological Time. 2nd edition. Springer, Berlin; 2001.
2. Guckenheimer J: Isochrons and phaseless sets. J Math Biol 1975, 1:259–273. 10.1007/BF01273747
3. Cohen AH, Rand RH, Holmes PJ: Systems of coupled oscillators as models of central pattern generators. In Neural Control of Rhythmic Movements in Vertebrates. Wiley, New York; 1988.
4. Kopell N, Ermentrout GB: Symmetry and phaselocking in chains of weakly coupled oscillators. Commun Pure Appl Math 1986, 39:623–660. 10.1002/cpa.3160390504
5. Ermentrout GB: n:m phase-locking of weakly coupled oscillators. J Math Biol 1981, 12:327–342. 10.1007/BF00276920
6. Izhikevich EM: Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. MIT Press, Cambridge; 2007.
7. Ermentrout GB, Terman DH: Mathematical Foundations of Neuroscience. Springer, Berlin; 2010.
8. Josic K, Shea-Brown ET, Moehlis J: Isochron. Scholarpedia 2006, 1(8): Article ID 1361.
9. Guillamon A, Huguet G: A computational and geometric approach to phase resetting curves and surfaces. SIAM J Appl Dyn Syst 2009, 8(3):1005–1042. 10.1137/080737666
10. Osinga HM, Moehlis J: A continuation method for computing global isochrons. SIAM J Appl Dyn Syst 2010, 9(4):1201–1228. 10.1137/090777244
11. Mauroy A, Mezic I: On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos 2012, 22(3): Article ID 033112.
12. Brown E, Moehlis J, Holmes P: On the phase reduction and response dynamics of neural oscillator populations. Neural Comput 2004, 16:673–715. 10.1162/089976604322860668
13. Hoppensteadt FC, Izhikevich EM: Weakly Connected Neural Networks. Springer, Berlin; 1997.
14. Achuthan S, Canavier CC: Phase-resetting curves determine synchronization, phase locking, and clustering in networks of neural oscillators. J Neurosci 2009, 29(16):5218–5233. 10.1523/JNEUROSCI.0426-09.2009
15. Yoshimura K: Phase reduction of stochastic limit-cycle oscillators. In Reviews of Nonlinear Dynamics and Complexity. Volume 3. Wiley, New York; 2010:59–90.
16. Lin KK, Wedgwood KCA, Coombes S, Young LS: Limitations of perturbative techniques in the analysis of rhythms and oscillations. J Math Biol 2013, 66:139–161. 10.1007/s00285-012-0506-0
17. Demir A, Suvak O: Quadratic approximations for the isochrons of oscillators: a general theory, advanced numerical methods and accurate phase computations. IEEE Trans Comput-Aided Des Integr Circuits Syst 2010, 29:1215–1228.
18. Medvedev GS: Synchronization of coupled stochastic limit cycle oscillators. Phys Lett A 2010, 374:1712–1720. 10.1016/j.physleta.2010.02.031
19. Diliberto SP: On systems of ordinary differential equations. In Contributions to the Theory of Nonlinear Oscillations. Annals of Mathematical Studies 20. Princeton University Press, Princeton; 1950:1–38.
20. Hale JK: Ordinary Differential Equations. Wiley, New York; 1969.
21. Ermentrout GB, Kopell N: Oscillator death in systems of coupled neural oscillators. SIAM J Appl Math 1990, 50:125–146. 10.1137/0150009
22. Ott W, Stenlund M: From limit cycles to strange attractors. Commun Math Phys 2010, 296:215–249. 10.1007/s00220-010-0994-y
23. Morris C, Lecar H: Voltage oscillations in the barnacle giant muscle fiber. Biophys J 1981, 35:193–213. 10.1016/S0006-3495(81)84782-0
24. Rinzel J, Ermentrout GB: Analysis of neural excitability and oscillations. In Methods in Neuronal Modeling. 1st edition. MIT Press, Cambridge; 1989:135–169.
25. Connor JA, Stevens CF: Prediction of repetitive firing behaviour from voltage clamp data on an isolated neurone soma. J Physiol 1971, 213:31–53.
26. Kepler TB, Abbott LF, Marder E: Reduction of conductance-based neuron models. Biol Cybern 1992, 66:381–387. 10.1007/BF00197717
27. Lin KK, Young LS: Shear-induced chaos. Nonlinearity 2008, 21(5):899–922. 10.1088/0951-7715/21/5/002
28. Wang Q, Young LS: Strange attractors with one direction of instability. Commun Math Phys 2001, 218:1–97. 10.1007/s002200100379
29. Wang Q, Young LS: From invariant curves to strange attractors. Commun Math Phys 2002, 225:275–304. 10.1007/s002200100582
30. Wang Q: Strange attractors in periodically-kicked limit cycles and Hopf bifurcations. Commun Math Phys 2003, 240:509–529.
31. Catllá AJ, Schaeffer DG, Witelski TP, Monson EE, Lin AL: On spiking models for synaptic activity and impulsive differential equations. SIAM Rev 2008, 50:553–569. 10.1137/060667980
32. Christiansen F, Rugh F: Computing Lyapunov spectra with continuous Gram–Schmidt orthonormalization. Nonlinearity 1997, 10:1063–1072. 10.1088/0951-7715/10/5/004
33. Ashwin P: Weak coupling of strongly nonlinear, weakly dissipative identical oscillators. Dyn Syst 1989, 10(3):2471–2474.
34. Ashwin P, Dangelmayr G: Isochronicity-induced bifurcations in systems of weakly dissipative coupled oscillators. Dyn Stab Syst 2000, 15(3):263–286. 10.1080/713603745
35. Ashwin P, Dangelmayr G: Reduced dynamics and symmetric solutions for globally coupled weakly dissipative oscillators. Dyn Syst 2005, 20(3):333–367. 10.1080/14689360500151813
36. Han SK, Kurrer C, Kuramoto Y: Dephasing and bursting in coupled neural oscillators. Phys Rev Lett 1995, 75:3190–3193. 10.1103/PhysRevLett.75.3190
37. Coombes S: Neuronal networks with gap junctions: a study of piecewise linear planar neuron models. SIAM J Appl Dyn Syst 2008, 7(3):1101–1129. 10.1137/070707579


Author information


Corresponding author

Correspondence to Kyle CA Wedgwood.

Additional information

Competing Interests

The authors confirm that they have no competing interests of which they are aware.

Authors’ Contributions

KCAW, KKL, RT and SC contributed equally. All authors read and approved the final manuscript.

Electronic supplementary material

13408_2012_27_MOESM1_ESM.mov

Additional file 1: A movie showing the moving orthonormal system for the Connor–Stevens model. The top panel shows a projection of the moving orthonormal system from the full (v,u,a,b) space onto (v,u,b). Around the point γ(θ), where θ is the phase, we establish an orthonormal basis in a subspace of (v,u,b). As θ evolves, so does this coordinate system, as shown by the moving black lines, which represent the moving orthonormal basis. In this movie, we choose some initial conditions off cycle, shown by the blue orbit. The ρ coordinates along the moving coordinate system are shown in the bottom panel (MOV 649 kB)

13408_2012_27_MOESM2_ESM.mov

Additional file 2: A movie showing the stretch-and-fold action brought about by shear forces. In this movie, both the shear forces and the rate of attraction back to cycle are linear. The limit cycle is first unravelled so that it may be represented by a straight line. We choose P(θ)=sin(θ) as our forcing function and apply it instantaneously at t=0. We then allow the resulting image of the kicked orbit to evolve under the flow generated by system (15) between kicks until t=1 (in arbitrary units). As the curve relaxes back to the cycle, we see that the shear forcing causes a fold in the curve to develop. The accumulation of such folds over successive forcing periods can ultimately give rise to chaotic dynamics, which would not be observed in the corresponding phase-only model. The thinner black lines represent the isochrons of the system which, in this simple example, are straight lines with slope −λ/σ. Since the isochrons of the system are unchanged between kicks, we observe that phase points simply traverse the isochron they are kicked to as they relax back to cycle (MOV 769 kB)

13408_2012_27_MOESM3_ESM.mov

Additional file 3: A movie showing the accumulation of folds in the kicked ML model. The thin black line represents the underlying periodic orbit of the system ẋ = f(x), with f taken for the ML model. Every T units of time, we apply a kick taking v → v − A, where A=2.0, whilst leaving w unchanged, to all phase-points. The movie then shows the evolution of all of these phase-points. Please note that this movie does not show trajectories of the system, but the image of points starting on the limit cycle, under the action of the kick composed with the flow generated by ẋ = f(x). This movie shows the action of 4 such kicks. We observe that trajectories spend a long time near the saddle node to the bottom left of the figure, so that these trajectories travel slower than those close to the limit cycle. As we apply more kicks, we see the folds developing and accumulating (MOV 2118 kB)


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Wedgwood, K.C., Lin, K.K., Thul, R. et al. Phase-Amplitude Descriptions of Neural Oscillator Models. J. Math. Neurosc. 3, 2 (2013). https://doi.org/10.1186/2190-8567-3-2
