I am trying to implement a learning model whose membrane potential function depends on past spiking behavior. How would I implement such a function in the `step_math` function?

# How to implement the step math function when the membrane potential depends on the past spikes?

**arvoelke**#2

I can see that you’ve asked for help in the thread How do you create a neural model in Nengo? and received a link to the tutorial: https://www.nengo.ai/nengo/examples/usage/rectified_linear.html

At this point you might want to dig into the Nengo code for more reference. For example, here is how the Izhikevich model is implemented. Note the `recovery` parameter used to track additional state information between successive calls to `step_math`. In your case, such a parameter could be used to remember values such as the time that each neuron last spiked.

It may also help if you could give more detail about your model. Depending on what information is needed by `step_math`, there could be several different ways of going about this (e.g., as a custom unsupervised learning rule, or by doing something similar to the below).
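For instance, here is a minimal self-contained sketch (plain NumPy; the model and its parameters are hypothetical, not from the Nengo source) of a `step_math` that remembers how long ago each neuron last spiked and uses that to raise the spiking threshold right after a spike:

```python
import numpy as np

class RefractoryBoostNeuron:
    """Hypothetical spiking model: the threshold is elevated right after
    each spike, so step_math needs to remember past spiking behavior."""

    def __init__(self, tau_boost=0.01):
        self.tau_boost = tau_boost  # how quickly the post-spike boost decays

    def step_math(self, dt, J, spiked, voltage, time_since_spike):
        # integrate the input current (crude Euler step)
        voltage += dt * J
        time_since_spike += dt

        # threshold depends on how recently each neuron spiked
        threshold = 1.0 + np.exp(-time_since_spike / self.tau_boost)

        # emit spikes (scaled by 1/dt, as Nengo does) and reset state
        fired = voltage > threshold
        spiked[:] = fired / dt
        voltage[fired] = 0.0
        time_since_spike[fired] = 0.0
```

Here `voltage` and `time_since_spike` are the extra state arrays that persist between calls; the next posts explain how such extra arguments get wired up.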

**patelvrajn**#3

I now understand how this is done. Would it be possible to ask a follow-up question?

- I understand that in Python you're allowed to override functions with a different number of parameters, as in this situation where `step_math` is being overridden with a different signature. However, it was my understanding that this is bad practice, in the sense that it violates the Liskov substitution principle and introduces weird behaviors. Am I incorrect in my understanding? And if my understanding is incorrect, can I add any number of parameters to any of these functions?

**arvoelke**#4

I don’t think LSP is really violated in this situation, as you can think of the list of additional parameters as a single `*states` argument. That is, the abstraction is actually:

```
step_math(dt, J, output, *states)
```

The implementation of the `SimNeurons` operator, which is responsible for calling the `step_math` function, is shown below:
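Paraphrasing the idea (this is a minimal runnable mock, not the verbatim Nengo source, whose details vary across versions), the operator's `make_step` looks up each signal's array and splats the state arrays into `step_math`:

```python
import numpy as np

class SimNeuronsSketch:
    """Rough stand-in for Nengo's SimNeurons operator, showing how the
    simulator wires extra state signals into step_math."""

    def __init__(self, neurons, J, output, states=None):
        self.neurons = neurons
        self.J = J                  # key of the input-current signal
        self.output = output        # key of the spike-output signal
        self.states = [] if states is None else states

    def make_step(self, signals, dt, rng):
        J = signals[self.J]
        output = signals[self.output]
        states = [signals[state] for state in self.states]

        def step_simneurons():
            # every extra state array is passed positionally to step_math
            self.neurons.step_math(dt, J, output, *states)

        return step_simneurons
```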

where `states` is a list of `Signal` objects. But I just realized that I forgot to mention something important in my last post (although it is mentioned in the tutorial). You can add as many states as you would like, but the builder function still needs to be overridden to create each corresponding signal. This is how the simulator knows how to call your `step_math` function with the correct additional arguments. An example is given in the tutorial mentioned before, and shown below for the case of the Izhikevich model:

If you need something that isn’t supported by this abstraction, you can even write your own operator (instead of using `SimNeurons`), but I don’t think that will be necessary for your case.