FitzHugh-Nagumo model implementation


Good morning,
I’m starting to use Nengo and I would like to implement a new neuron type, in particular the FitzHugh-Nagumo (FHN) model. I’ve used the rectified linear example as a guideline, which is very helpful, but I have some problems.
First, I implemented the neuron class.

class FHN(nengo.neurons.NeuronType):
    """FitzHugh-Nagumo neuron model."""

    probeable = ('voltage', 'recovery')

    alpha = NumberParam('alpha')
    gamma = NumberParam('gamma')
    epsilon = NumberParam('epsilon')

    def __init__(self, alpha=-0.1, gamma=0.008, epsilon=0.01):
        super(FHN, self).__init__()
        self.alpha = alpha
        self.gamma = gamma
        self.epsilon = epsilon

    @property
    def _argreprs(self):
        args = []

        def add(attr, default):
            if getattr(self, attr) != default:
                args.append("%s=%s" % (attr, getattr(self, attr)))
        add("alpha", -0.1)
        add("gamma", 0.008)
        add("epsilon", 0.01)
        return args

    def rates(self, x, gain, bias):
        """Estimates steady-state firing rate given gain and bias.

        Uses the `.settled_firingrate` helper function.
        """
        J = self.current(x, gain, bias)
        voltage = np.zeros_like(J) 
        recovery = np.zeros_like(J)
        return settled_firingrate(self.step_math, J, [voltage, recovery],
                                 settle_time=0.001, sim_time=1.0)

    def step_math(self, dt, J, voltage, recovery):
        dV = (voltage * (voltage - self.alpha) * (1 - voltage) - recovery + J)
        voltage[:] += dV * dt

        dw = (self.epsilon * (voltage - self.gamma * recovery))
        recovery[:] += dw * dt 
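As a sanity check on the dynamics themselves, independent of any Nengo machinery, the two update rules in step_math can be integrated directly with NumPy. This is only a sketch: the parameter values are the defaults from the class above, and the constant input J = 0.5 is an arbitrary choice for illustration.

```python
import numpy as np

# FitzHugh-Nagumo parameters (defaults from the FHN class above)
alpha, gamma, epsilon = -0.1, 0.008, 0.01

dt = 0.001          # integration time step
J = 0.5             # constant input current (arbitrary, dimensionless)
voltage, recovery = 0.01, 0.0

trace = []
for _ in range(20000):
    # Same forward-Euler updates as in step_math
    dV = voltage * (voltage - alpha) * (1.0 - voltage) - recovery + J
    voltage += dV * dt
    dw = epsilon * (voltage - gamma * recovery)
    recovery += dw * dt
    trace.append(voltage)

trace = np.array(trace)
print(trace.min(), trace.max())
```

With these defaults the voltage-like variable rises from 0.01 toward the upper branch of the cubic nullcline while the recovery variable climbs slowly, so the state stays bounded.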

Here I have two questions:

  • since the FHN is a normalized model, should I also normalize the input current J? (I have understood, maybe wrongly, that J is a voltage input in mV.)
  • when I run it, an error occurs in the rates method.
    If I use the settled_firingrate:
    If I use the settled_firingrate:
~\Anaconda3\lib\site-packages\nengo\utils\ in settled_firingrate(step_math, J, states, dt, settle_time, sim_time)
    164     steps = int(settle_time / dt)
    165     for _ in range(steps):
--> 166         step_math(dt, J, out, *states)
    167     # Simulate for sim time, and keep track
    168     steps = int(sim_time / dt)

TypeError: step_math() takes 5 positional arguments but 6 were given

I don’t understand why this happens. If instead I use the default rates method, I get this error:

~\Anaconda3\lib\site-packages\nengo\ in rates(self, x, gain, bias)
    166         J = self.current(x, gain, bias)
    167         out = np.zeros_like(J)
--> 168         self.step_math(dt=1., J=J, output=out)
    169         return out

TypeError: step_math() got an unexpected keyword argument 'output'

Maybe there is a mistake in my implementation of the step_math method, or I am not using the rates method properly.
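Reading the settled_firingrate source shown in the traceback, it calls step_math(dt, J, out, *states), i.e. it passes an output array before the state arrays, and the default rates method likewise passes output= as a keyword. So one possibility I’m considering is adding an output argument to step_math. This is only a sketch of the signature change; the spike threshold value is just a guess of mine, not something from the model:

```python
import numpy as np

class FHN:
    """Sketch only: step_math with the extra `output` argument that
    settled_firingrate expects, i.e. step_math(dt, J, output, *states)."""

    alpha, gamma, epsilon = -0.1, 0.008, 0.01
    threshold = 0.8  # assumed spike threshold on the voltage-like variable

    def step_math(self, dt, J, output, voltage, recovery):
        dV = voltage * (voltage - self.alpha) * (1 - voltage) - recovery + J
        voltage[:] += dV * dt
        dw = self.epsilon * (voltage - self.gamma * recovery)
        recovery[:] += dw * dt
        # Emit a spike of height 1/dt (Nengo's spike convention)
        # whenever the voltage variable is above the assumed threshold
        output[:] = (voltage > self.threshold) / dt

# Minimal driving loop standing in for settled_firingrate's inner loop
J = np.array([0.5])
voltage = np.full(1, 0.01)
recovery = np.zeros(1)
out = np.zeros(1)
neuron = FHN()
for _ in range(2000):
    neuron.step_math(0.001, J, out, voltage, recovery)
print(voltage, out)
```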

The second step is to implement an Operator subclass: I thought I would reuse the default SimNeurons operator in neurons\builder\ that is used for all the neuron models.

Finally, the third step is to implement a build function:

@Builder.register(FHN)
def build_fhn(model, fhn, neurons):
    model.sig[neurons]['voltage'] = Signal(
        np.ones(neurons.size_in) * 0.01, name="%s.voltage" % neurons)  # voltage initial condition = 0.01
    model.sig[neurons]['recovery'] = Signal(
        np.zeros(neurons.size_in), name="%s.recovery" % neurons)  # recovery initial condition = 0
    model.add_op(SimNeurons(
        neurons=fhn, J=model.sig[neurons]['in'],
        output=model.sig[neurons]['out'],
        states=[model.sig[neurons]['voltage'],
                model.sig[neurons]['recovery']]))

If possible, can you help me? I can’t see where I’m going wrong.
Thank you in advance,



J is the input current and does not have defined units. Also, despite reading about the FHN model, I’m not sure what you mean by “normalized” — can you clarify?
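Concretely, J comes from the gain/bias mapping (NeuronType.current computes J = gain * x + bias), so its scale is whatever the gains and biases make it. A quick sketch of that relationship, with made-up gain and bias values:

```python
import numpy as np

# How the input current J is formed in Nengo (J = gain * x + bias),
# so J inherits whatever scale gain and bias impose -- no fixed units.
gain = np.array([1.0, 2.0])  # example per-neuron gains (made up)
bias = np.array([0.5, 0.0])  # example per-neuron biases (made up)
x = np.array([0.3, 0.3])     # encoded input for each neuron

J = gain * x + bias
print(J)  # -> [0.8 0.6]
```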


Ah ok, thank you, I was confused!

“Normalized” in the sense that the equations are dimensionless: the ‘v’ variable doesn’t represent the real membrane potential of the neuron (it is only a voltage-like variable), whereas, for example, the Izhikevich neuron’s does. For this reason I thought that the input current J had units, or at least a known order of magnitude.


Overall, you seem to be doing everything correctly, judging by the similarity between your implementation and the existing Izhikevich neuron implementation (neuron code, builder code). So I’m not sure what’s going on here, but I’ll keep looking at it.


Did you manage to refine this neuron model to get it working?


No, it still doesn’t work. There must be some implementation error in the rates method (or in step_math), but I cannot identify it.