# Deriving spiking voltage and spiking frequency of a neuron from a trained nengo_dl model

Hello nengo_dl community,

I hope you all are doing well.
I would like to extract parameters from a trained nengo_dl model. I can extract the weights of the model, but I am more interested in parameters like the spiking threshold of a neuron and the frequency at which it is spiking. In other words, how can I find out what the spiking threshold of a neuron is, and at what frequency it is spiking? Is there any way to extract or estimate them?

Hi @Choozi,

The firing threshold of a neuron is typically defined by the bias current provided to the neuron when the network is running. The bias current is in turn determined by the bias weights, which are probably one of the values extracted as part of all of the network weights. Note that the bias weights can interact with the other connection weights to change the neuron spiking threshold, so determining the exact firing threshold may not be straightforward.

As a side note, I should clarify that for any neuron of a single type, the spiking threshold is always the same: when the input current slightly exceeds the spiking threshold current. The “difference” in spiking thresholds for neurons in a network can be attributed to the bias currents fed to the neurons when the network is in a resting (no input) state.

In regards to the spiking frequency, that information can be obtained with the use of nengo.Probe objects. This NengoDL example has example code showing how to do this. Note that if the inputs to your network change over time, so will the spiking frequencies of the neurons in the network.

Let me clarify with an example. Consider a single LIF neuron that takes only a single input. Then we have one connection weight and one bias weight. The firing threshold is determined by the bias weight, right?
Let's suppose the bias weight does not interact with other connection weights. How can I then interpret or determine the spiking threshold from a bias weight of, let's say, 0.6?

Note that the bias weights can interact with the other connection weights to change the neuron spiking threshold, so determining the exact firing threshold may not be straightforward.

Alright. Can we at least determine the range of the spiking threshold, i.e., the maximum spiking threshold it can achieve?

As a side note, I should clarify that for any neuron of a single type, the spiking threshold is always the same

Sorry, this is confusing me. I am using an LIF neuron; does this mean the spiking threshold is predefined, or what?

That is correct (i.e., the bias weight does play a role in the firing threshold), but it’s not the full answer. I go into more depth further down this post.

If you are using Nengo with the nengo.LIF neuron (which by default represents the range -1 to 1), and the neuron has a gain of 1 and a bias of 0.6, that would be equivalent to an intercept of 0.4. You can test this out with the following code:

```python
import matplotlib.pyplot as plt
import nengo
from nengo.utils.ensemble import response_curves

with nengo.Network() as model:
    ens = nengo.Ensemble(1, 1, gain=[1], bias=[0.6])

with nengo.Simulator(model) as sim:
    eval_points, activities = response_curves(ens, sim)

plt.figure()
plt.plot(eval_points, activities)
plt.show()
```


This results in a response curve where the neuron is silent up to x = 0.4 and fires at increasing rates beyond that point.

Note that changing the gain of the neuron will affect the x value at which the neuron starts firing (I assume this point is what you are referring to when you say “spiking threshold”).

To understand why this is the case, we need to look at the LIF equation. In Nengo, the LIF equation is implemented as follows (a(x) is the activity of the neuron for a given input x):

a(x) = \frac{1}{\tau_{ref} - \tau_{RC}\ln\left(1 - \frac{J_{th}}{\alpha x + J_{bias}}\right)}

From the equation, the point at which the neuron starts firing (the x-intercept, or x_{int}) is where the term inside the natural log goes just above 0 (for values less than or equal to 0, the log is undefined). I.e., the neuron starts firing when 1 - \frac{J_{th}}{\alpha x + J_{bias}} > 0. Rearranging the terms, we get:

x_{int} = \frac{J_{th} - J_{bias}}{\alpha}

In Nengo, we use J_{th} = 1 for simplicity, so this equation becomes:

x_{int} = \frac{1 - J_{bias}}{\alpha}

If we substitute a gain value of 1 (i.e., \alpha = 1) and a bias weight of 0.6 (as per your question), we see that the x-intercept works out to 0.4.
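As a quick numerical check (pure NumPy, using Nengo's default LIF time constants \tau_{RC} = 0.02 s and \tau_{ref} = 0.002 s), evaluating the rate equation above and finding the first input with a nonzero rate recovers this intercept:

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002     # Nengo's default LIF time constants
gain, bias, J_th = 1.0, 0.6, 1.0  # values from the discussion above

def lif_rate(x):
    J = gain * x + bias
    # guard keeps the log argument valid where the neuron is silent;
    # those entries are masked to 0 by the np.where below
    safe_J = np.maximum(J, J_th * (1 + 1e-12))
    return np.where(J > J_th,
                    1.0 / (tau_ref - tau_rc * np.log1p(-J_th / safe_J)),
                    0.0)

x = np.linspace(-1, 1, 201)
rates = lif_rate(x)
x_int = x[rates > 0][0]  # first input value with a nonzero firing rate
print(x_int)             # close to (1 - 0.6) / 1 = 0.4
```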

What I meant by my statement is that for an LIF neuron on its own (no input weight, no bias weight), the spiking threshold is always when the input current just exceeds the firing threshold current (J_{th}). This value is the same for all LIF neurons under the same conditions (no input weight, no bias weight). In Nengo, heterogeneous neuron response curves are generated by randomizing the neuron gains and biases.

But what I wanted for the spiking threshold question was a more illustrated explanation, as in the thread “Spiking threshold as a parameter”, where you can see the spiking threshold on the y-axis. In that thread, and in the current one, you explained that the spiking threshold depends on the gain and bias. However, in nengo_dl these gains and biases get updated during training, and I want to keep them within a desired range, let's say from 0 volts to 0.5 volts. One possible way I see is the regularization method percentile_l2_loss_range in this example. But is there any other way to do so?

@Choozi I think there may have been some misunderstanding on my part about what you meant by “firing threshold”. What you were referring to was the threshold voltage at which point the neuron would spike. Whereas, what I thought you were referring to was the input value at the point where the neuron starts firing (we commonly refer to this as the “x-intercept”). These two are related but not the same.

If you refer to the equations from my previous post, the “firing threshold” you are referring to is just J_{th} (there is an R [resistance] term as well, but in Nengo, both R and J_{th} are 1, so the whole value is just 1). And, in those equations, what I referred to as the “firing threshold” is x_{int}.

In Nengo, the firing threshold of a neuron is assumed to be 1 in all cases. Even in biology, there aren’t any neurons I’m aware of where the threshold membrane potential for the neuron ion channels can be modulated in some way. As far as I know, ion channels for specific ions have a fixed voltage at which they open causing the cascade of current that forms the spike. In the thread you linked, you’ll see that @arvoelke is not actually modifying the firing threshold of the neuron. Rather, some mathematical manipulation is being made to emulate what would happen if the firing threshold was changed (i.e., by manipulating the data rather than by creating a custom neuron type).

To summarize, in NengoDL, the neuron firing threshold (for LIF neurons) is fixed, and doesn’t change as a result of the training process.

As a caveat, I should clarify that since it’s all math (and the operations are commutative), you can technically derive the “changed” firing threshold if you are willing to fix some of the values. But, this is rather arbitrary… Using scalar numbers as an example:
It’s like having the number 18 and trying to find out how you got that number… It could have been 1 \times 18 (the “fixed” threshold case), or maybe 2 \times 9 or 3 \times 6 (the “modified” threshold cases). The value you get depends on which number you keep fixed in that equation. Going back to the post that @arvoelke made, the amount of change made to the firing threshold really depends on the scaling you apply to the output of the neuron. The issue is that in a more complicated network, this scaling can be applied to the neuron’s output or to the input of the succeeding neuron, and there’s no clear way to decide how much of the weight gets applied to one or the other. This is especially true if you just have one value for the connection weight, as is typically the case with networks trained in NengoDL.
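To make the ambiguity concrete, here is a small NumPy sketch (using the LIF rate equation from earlier in this thread and Nengo's default time constants): a neuron with a "modified" threshold J_{th} = 0.5 produces exactly the same response curve as a standard J_{th} = 1 neuron whose gain and bias have been scaled by 1/J_{th}, so the two parameterizations cannot be distinguished from the weights alone:

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002  # Nengo's default LIF time constants

def lif_rate(x, gain, bias, J_th=1.0):
    # LIF rate equation from earlier in the thread, with an explicit threshold
    J = gain * x + bias
    safe_J = np.maximum(J, J_th * (1 + 1e-12))  # keeps the log argument valid
    return np.where(J > J_th,
                    1.0 / (tau_ref - tau_rc * np.log1p(-J_th / safe_J)),
                    0.0)

x = np.linspace(-1, 1, 201)

# "Modified threshold" neuron: J_th = 0.5
modified = lif_rate(x, gain=1.0, bias=0.6, J_th=0.5)
# Standard neuron (J_th = 1) with gain and bias scaled by 1 / J_th = 2
standard = lif_rate(x, gain=2.0, bias=1.2, J_th=1.0)

print(np.allclose(modified, standard))  # the two curves are identical
```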

I don’t have much experience in limiting the gain and biases during the training process, but I believe that the regularization method would probably be the best way to do this.

@xchoo Thank you for taking the time and giving detailed explanations.

I think there may have been some misunderstanding on my part about what you meant by “firing threshold”.

No worries. I know, I didn’t explain my question very well.

What you were referring to was the threshold voltage at which point the neuron would spike.

Exactly.

To summarize, in NengoDL, the neuron firing threshold (for LIF neurons) is fixed, and doesn’t change as a result of the training process.

Ok, got it.

Whereas, what I thought you were referring to was the input value at the point where the neuron starts firing (we commonly refer to this as the “x-intercept”). These two are related but not the same.

Ok, so if I understood correctly, the input value at the point where the neuron starts firing is the input for which the neuron's input current reaches the threshold value of 1. Right?

As a caveat, I should clarify that since … case with networks trained in NengoDL.

Thank you for your detailed explanation. I will play with it around and will get back to you if I have more questions.

I don’t have much experience in limiting the gain and biases during the training process, but I believe that the regularization method would probably be the best way to do this.

I wanted to limit the gains and biases just to limit the “threshold voltage”. From what you explained, that is J_{th} in the above equation, and it is always set to 1 for simplicity. So if I want J_{th} to be not 1 but within some other range, let's suppose from 0 to 0.5 V, is this possible?

If you wanted to modify the J_{th} value, you’ll probably need to create and use a custom neuron type. This forum post should give you an idea of how to add a custom neuron to NengoDL.
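As a rough illustration of what such a custom neuron type would have to implement (this is a plain-Python toy simulation, not the actual Nengo integration scheme or NengoDL API; the function name and parameters are hypothetical), here is a spiking LIF loop where the threshold voltage V_th is an explicit parameter instead of the fixed value of 1:

```python
def simulate_lif(J, V_th=0.5, tau_rc=0.02, tau_ref=0.002, dt=0.001, T=1.0):
    """Toy spiking LIF with an adjustable threshold voltage V_th."""
    v, refractory, n_spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        if refractory > 0:
            refractory -= dt              # still in the refractory period
        else:
            v += (dt / tau_rc) * (J - v)  # leaky integration of the input current
            if v > V_th:                  # spike at the custom threshold voltage
                n_spikes += 1
                v = 0.0
                refractory = tau_ref
    return n_spikes / T                   # firing rate in Hz

print(simulate_lif(J=1.5, V_th=0.5))  # lower threshold: fires faster
print(simulate_lif(J=1.5, V_th=1.0))  # higher threshold: fires slower
```

In an actual custom neuron type, this per-step update would live in the neuron class's step method so the simulator (and NengoDL's TensorFlow build of it) can run it.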

@xchoo
Thank you, got it.