How do I implement the step_math function when the membrane potential depends on past spikes?

I am trying to implement a learning model whose membrane potential function depends on past spiking behavior. How would I implement such a function in step_math?

I can see that you've asked for help in the thread How do you create a neural model in Nengo? and received a link to the tutorial: https://www.nengo.ai/nengo/examples/usage/rectified_linear.html

At this point you might want to dig into the Nengo code for more reference. For example, here is how the Izhikevich model is implemented. Note the recovery parameter used to track additional state information between successive calls to step_math. In your case, such a parameter could be used to remember values such as 'the time that each neuron last spiked'.
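The linked implementation did not carry over into this copy, but the idea can be sketched without Nengo at all. In the toy model below (all names are illustrative; none of this is Nengo's actual API), last_spike_time plays the same role as Izhikevich's recovery: an extra per-neuron array that persists between successive step calls, so that the current step can depend on past spiking.

```python
import numpy as np

class HistoryNeuron:
    """Toy neuron (illustrative only, not Nengo's API) whose drive is
    attenuated right after a spike and recovers with time constant
    tau_adapt. `last_spike_time` is the extra state carried across
    successive step calls."""

    def __init__(self, n_neurons, tau_adapt=0.05):
        self.tau_adapt = tau_adapt
        self.voltage = np.zeros(n_neurons)
        # Pretend every neuron last spiked infinitely long ago
        self.last_spike_time = np.full(n_neurons, -np.inf)

    def step(self, dt, t, J):
        # Attenuation depends on how recently each neuron spiked
        adapt = 1.0 - np.exp(-(t - self.last_spike_time) / self.tau_adapt)
        self.voltage += dt * J * adapt
        spiked = self.voltage > 1.0
        self.voltage[spiked] = 0.0        # reset membrane after a spike
        self.last_spike_time[spiked] = t  # record spike time (the state)
        return spiked
```

Because the input is suppressed just after each spike, the inter-spike intervals lengthen after the first spike, which is exactly the kind of history dependence you described.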

It may also help if you could give more detail about your model. Depending on what information is needed by step_math, there could be several different ways of going about this (e.g., as a custom unsupervised learning rule, or by doing something similar to the below).


I now understand how this is done. Would it be possible to ask a follow-up question?

  1. I understand that in Python you're allowed to override methods with a different number of parameters, as in this situation where step_math is being overridden with a different signature. However, it was my understanding that this is bad practice in that it violates the Liskov substitution principle and introduces strange behavior. Am I incorrect in my understanding? And if I am incorrect, can I add any number of parameters to any of these functions?

I don't think LSP is really violated in this situation, as you can think of the list of additional parameters as a single *states argument. That is, the abstraction is actually:

step_math(dt, J, output, *states)

The implementation of the SimNeurons operator, which is responsible for calling the step_math function, is shown below:
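That code block did not survive in this copy. Paraphrasing the idea (this is not Nengo's verbatim source), make_step closes over the signal arrays and forwards the extra state arrays to step_math as trailing positional arguments:

```python
import numpy as np

def make_step(neurons, dt, J, output, states):
    """Paraphrased sketch of what SimNeurons.make_step does (not verbatim
    Nengo source): close over the arrays and forward the states."""
    def step_simneurons():
        neurons.step_math(dt, J, output, *states)
    return step_simneurons

class ToyNeuron:
    """Minimal neuron type with one extra state, to exercise the sketch."""
    def step_math(self, dt, J, output, voltage):
        voltage += dt * J   # integrate input into the state array
        output[:] = voltage # expose the state as the output
```

The simulator calls the returned step_simneurons() once per timestep, so whatever arrays you placed in states persist across calls.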

where states is a list of Signal objects. But I just realized that I forgot to mention something important in my last post (although it is mentioned in the tutorial). You can add as many states as you would like, but the builder function still needs to be overridden to create each corresponding signal. This is how the simulator knows how to call your step_math function with the correct additional arguments. An example is given in the tutorial mentioned before, and shown below for the case of the Izhikevich model:
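The Izhikevich builder snippet also did not come through here. Stripped of Nengo entirely, the mechanism looks roughly like this (the registry, names, and model dynamics below are all illustrative stand-ins, not Nengo's actual API): the registered build function allocates one array per extra state, and the order of those arrays must match the step_math signature.

```python
import numpy as np

# Hypothetical stand-in for Nengo's Builder registry (illustrative names)
BUILD_FUNCTIONS = {}

def register(neuron_cls):
    def decorator(build_fn):
        BUILD_FUNCTIONS[neuron_cls] = build_fn
        return build_fn
    return decorator

class IzhikevichLike:
    """Toy two-state neuron standing in for the real Izhikevich dynamics."""
    def step_math(self, dt, J, output, voltage, recovery):
        voltage += dt * (J - recovery)  # toy membrane dynamics
        recovery += dt * 0.1 * voltage  # toy recovery dynamics
        spiked = voltage > 1.0
        output[:] = spiked
        voltage[spiked] = 0.0           # reset spiking neurons

@register(IzhikevichLike)
def build_izhikevich_like(neuron_type, n_neurons):
    # One array per extra state; order must match step_math's signature
    return [np.zeros(n_neurons),   # voltage
            np.zeros(n_neurons)]   # recovery

def make_sim(neuron_type, n_neurons, dt):
    """Look up the registered build function, allocate the state arrays,
    and return a step closure that forwards them to step_math."""
    states = BUILD_FUNCTIONS[type(neuron_type)](neuron_type, n_neurons)
    output = np.zeros(n_neurons)
    def step(J):
        neuron_type.step_math(dt, J, output, *states)
        return output
    return step
```

In real Nengo the states would be Signal objects created inside the registered builder, but the ordering requirement is the same.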

If you need something that isnā€™t supported by this abstraction you can even write your own operator (instead of using SimNeurons) but I donā€™t think that will be necessary for your case.

Hi @arvoelke, great explanation of the problem. I'm currently working on a similar case and could use some support. Let's say I want to create a simple Node + Ensemble model that will allow me to control the gain value with a slider (from the Nengo GUI). Following your instructions step by step, I have the following elements:

  1. new neuron model
class MyNeuron(LIF):
    def step_math(self, dt, J, spiked, voltage, refractory_time, gain):
        super().step_math(dt, J*gain, spiked, voltage, refractory_time)
  2. then I create a builder function to register my model with the new "gain" parameter
@Builder.register(MyNeuron)
def build_gain_model(model: Model, my_neuron, neurons):
    model.sig[neurons]['gain'] = Signal(
        np.ones(neurons.size_in), name="%s.gain" % neurons)
    model.sig[neurons]['voltage'] = Signal(
        np.zeros(neurons.size_in), name="%s.voltage" % neurons)
    model.sig[neurons]['refractory_time'] = Signal(
        np.zeros(neurons.size_in), name="%s.refractory_time" % neurons)
    model.add_op(SimNeurons(neurons=my_neuron,
                            J=model.sig[neurons]['in'],
                            output=model.sig[neurons]['out'],
                            states=[model.sig[neurons]['voltage'],
                                    model.sig[neurons]['refractory_time'],
                                    model.sig[neurons]['gain']]))
  3. and now the model itself
model = nengo.Network()
with model:
    gain_node = nengo.Node([2], label="gain_node")
    a = nengo.Ensemble(n_neurons=1, dimensions=1, neuron_type=MyNeuron(), label="main_ensemble")

    nengo.Connection(gain_node, a)

Everything seems to be working, but the gain value remains unchanged throughout the simulation. How can I set it dynamically, for example with a slider from the GUI level?
I don't understand at all where the changes in the values of voltage and recovery come from, or how to control the gain value "online" :frowning: