[Nengo-DL] General Questions - Q1

Hello there!

I have been digging deep into Nengo-DL models for the past few months and have accumulated quite a number of general questions as I progress. In this topic I intend to ask them, but one by one, so as not to dilute the discussion. I will keep updating the list of questions here so that others might find it useful for a quick look. Note: my Nengo-DL model uses SpikingRectifiedLinear() neurons.

Q1. In a Nengo-DL model (created from a model trained in TF with ReLU neurons), I have set scale_firing_rates = 200. After inferencing for one batch, when I looked at the output of sim.data[ens0].max_rates, where ens0 is the first Conv layer ensemble, I see that it outputs an array of [200, 200, ..., 200] (if I don’t scale the firing rates, it’s [1, 1, ..., 1]). So I am assuming that the max_rates (i.e. the max firing rates) of all neurons is set to 200. Upon explicitly looking at the firing rate of every neuron for an individual test input, which I calculated by taking the sum of the spike array (an array of 1s and 0s) and dividing it by n_steps (i.e. the number of timesteps, in ms), I observed that the firing rates of a few neurons were larger than 200. How can a neuron spike at a higher firing rate than max_rates? I suppose that a neuron (without any refractory period) may spike every millisecond, so it can reach a firing rate of 1000Hz… is that right? Please correct my understanding if I am wrong anywhere.

Thanks!

There are two different (somewhat interlinked) parameters being discussed in your question: max_rates and scale_firing_rates. I’ll discuss both of them separately, and then try to link them together.

scale_firing_rates
The scale_firing_rates parameter comes about as somewhat of a necessity when switching from rate neurons to spiking neurons. Consider a model with a neuron trained in such a way that for an input value of 1, the neuron’s firing rate is 1 Hz. In this scenario, if the neuron is a rate neuron, the instantaneous firing rate of the neuron is available at every timestep of the simulation. I.e., if the input to the neuron is 1, then for every dt the output of the neuron is 1 Hz (or 1). However, if the neuron is a spiking neuron, for that same input (of 1), the spiking neuron would only spike once per second, meaning that the model would have no output (i.e., produce no information) for 999 of 1000 timesteps. There are multiple ways of mitigating this issue, which are discussed in this NengoDL example.

The scale_firing_rates parameter aims to address this rate-to-spiking effect by increasing the response of the neurons to a fixed input value. To do this, NengoDL applies the scale_firing_rates value to the input weights of the neurons, which makes them spike at a faster rate. So that scale_firing_rates doesn’t affect the overall function computed by the network, the inverse of the scale_firing_rates value is applied to the outputs of the neurons. This behaviour is described in the documentation of the NengoDL converter, here.
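As a minimal sketch of how this is typically set up when converting a Keras model (here, model is a placeholder for your trained Keras model; the parameter names are from the NengoDL converter):

import nengo
import nengo_dl
import tensorflow as tf

# `model` is a placeholder for a trained Keras model with ReLU activations
converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=200,  # scale neuron inputs up, spike amplitudes down
)

with nengo_dl.Simulator(converter.net) as sim:
    ens0 = converter.net.all_ensembles[0]
    # The scaling is reflected in the reported max_rates
    print(sim.data[ens0].max_rates)  # -> [200., 200., ..., 200.]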

It should be noted that scale_firing_rates isn’t always linear. I.e., a scale_firing_rates value of 100 doesn’t always increase the neuron’s output firing rate to 100 times the original. Rather, the linearity of the scale_firing_rates behaviour depends on the linearity of the neuron’s activation function. For neurons that have a linear activation (e.g., ReLU neurons), the scale_firing_rates parameter does scale the firing rate linearly. However, this is not true for neurons that do not have a linear activation (e.g., LIF neurons).
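You can see this for yourself with the rates() method that Nengo neuron types provide (a sketch; the specific gain, bias, and input values are just for illustration, and the printed numbers are approximate):

import numpy as np
import nengo

x = np.array([2.0])
gain, bias = np.array([1.0]), np.array([0.0])

# ReLU: scaling the gain by 100 scales the firing rate by exactly 100
relu = nengo.RectifiedLinear()
print(relu.rates(x, gain, bias))        # [2.]
print(relu.rates(x, gain * 100, bias))  # [200.]

# LIF: the same gain scaling increases the rate by far less than 100x,
# because the LIF response curve saturates for large input currents
lif = nengo.LIF()
print(lif.rates(x, gain, bias))         # ~[63.]
print(lif.rates(x, gain * 100, bias))   # ~[476.]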

max_rates
The description of the max_rates attribute can be found in the Nengo documentation here. While the attribute name sort of implies that it determines the maximum firing rate of a neuron, this isn’t actually the case. Because a neuron’s true maximum firing rate can be effectively unbounded (e.g., the ReLU neuron’s maximum firing rate is only limited by the simulation dt), Nengo and NengoDL define a neuron’s max_rates as its firing rate when the neuron is provided a specific input value. By default, that input value is 1, so if a neuron’s max_rates is 200, it means that when the neuron’s input is 1, the neuron’s firing rate will be 200Hz.

The logical continuation of the max_rates definition is that if a neuron is provided an input that exceeds 1, the neuron will be able to fire faster than the max_rates value. For neurons with linear activation functions, you’ll be able to increase the neuron’s firing rate all the way up to 1 / dt Hz, assuming the neuron has no refractory period.
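Here is a minimal sketch demonstrating this with a single SpikingRectifiedLinear neuron (the specific max_rates and input values are just for illustration):

import nengo

with nengo.Network() as net:
    stim = nengo.Node(2.0)  # an input of 2, beyond the max_rates point of 1
    ens = nengo.Ensemble(
        n_neurons=1, dimensions=1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        max_rates=[200], intercepts=[0], encoders=[[1]],
    )
    nengo.Connection(stim, ens, synapse=None)
    probe = nengo.Probe(ens.neurons)

with nengo.Simulator(net) as sim:  # default dt = 0.001
    sim.run(1.0)

# Each spike has amplitude 1/dt, so the mean output over time is the rate in Hz
print(sim.data[probe].mean())  # ~400, i.e. twice the "max_rates" of 200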

The relation between scale_firing_rates and max_rates
You rightly pointed out that there’s a relation between the value of scale_firing_rates and the value of max_rates. In fact, in your case it works out that max_rates == scale_firing_rates. In the discussion about scale_firing_rates, it was mentioned that the purpose of the scale_firing_rates parameter is to increase the firing rate of a neuron w.r.t. some input value. NengoDL achieves this by changing the neuron’s gain attribute, and, for a ReLU neuron, this effectively increases the “slope” of the neuron’s tuning curve. This is why, for the ReLU neuron, changing the scale_firing_rates parameter affects the max_rates value. The exact calculation of max_rates for the ReLU neuron can be found here.


Thank you @xchoo for such a detailed and informative answer. About the relation of max_rates and scale_firing_rates, the following is my overall understanding. The default max_rates denotes the default firing rate of a neuron for an input of 1. Now, ReLU neurons will by definition have an output of 1 for an input of 1, thus their default max_rates = 1 (and that’s why I see an array of [1, 1, 1, ..., 1] when I don’t apply scale_firing_rates). Next, as you said:

thus, when I apply scale_firing_rates = 200, the max_rates output of the SpikingRectifiedLinear neurons (inheriting from RectifiedLinear, as in the code) scales to [200, 200, 200, ..., 200] for an input of 1 - and behind the scenes this is due to the scaling of the input weights to these SpikingRectifiedLinear neurons (i.e. it scales the input of 1 to 200 - assuming an input weight of 1, thus an output of 200). However, going by the following:

if the “scaled” neuron receives an input of more than 1, it certainly is going to fire faster than 200Hz. This resolves my doubt. I hope I was correct in my understanding.

Two small follow-up questions:
1> Just confirming: it seems like max_rates increases linearly with input for linear neurons (e.g. ReLU), right?
2> I understand that multiplying the input weights of a neuron by scale_firing_rates and dividing its output by scale_firing_rates maintains the same behaviour as the unscaled neuron, so how come this affects the overall performance of the network? I mean, we receive the same output from a neuron irrespective of scaling. I guess the expected output is received earlier in time due to scaling… right? (thus helping to lower the simulation time).

Yes, this is correct. Although, I should clarify that in Nengo, the “input weights” of the neurons consist of several components: any encoders that are used (only for NEF-based ensembles; this doesn’t apply to converter-based networks), the connection transformation, and the neuron’s gain. What is being modified by the scale_firing_rates parameter is just the neuron’s gain value, so if you were to probe just the connection weights, you would not see the change reflected there.
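To see where the change actually lands, you could probe the built parameters (a sketch, reusing the hypothetical converter object from the earlier snippet):

with nengo_dl.Simulator(converter.net) as sim:
    ens = converter.net.all_ensembles[0]
    conn = converter.net.all_connections[0]
    print(sim.data[ens].gain)      # scaled by scale_firing_rates
    print(sim.data[conn].weights)  # trained weights, unchanged by the scaling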

This is correct. Since the ReLU neuron has a linear activation function, a linear increase in the neuron’s gain will result in a linear increase in the neuron’s max_rates. This does not hold true for neurons with non-linear activation functions.

A rate network example
The scale_firing_rates parameter changes the performance of the network because of the temporal nature of spikes. To explain this, let us consider a simple two-neuron, two-layer network, with the weights indicated in parentheses:

inp --(*1)--> A --(*2)--> B ---> output

First, let us consider the network with rate neurons. If we set inp to 1, neuron A will receive an input of 1, and thus its output firing rate will be 1. This value (1) is then fed through the connection from A to B, resulting in an input to neuron B of 2. Given an input of 2, the output firing rate of neuron B would be 2, resulting in a network output of 2.
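To make the arithmetic concrete, here is the same two-neuron rate network as a few lines of NumPy (purely illustrative):

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

inp = 1.0
a = relu(1.0 * inp)  # weight of 1 into A -> firing rate of 1
b = relu(2.0 * a)    # weight of 2 into B -> firing rate of 2
print(a, b)          # 1.0 2.0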

Before we move on to the spiking neurons, let us now run the rate network over time, using the same constant input of 1. What we expect the outputs of each layer to be at each timestep would be (note: this assumes no propagation delay to simplify the example):

t:   dt 2dt 3dt ... ndt
inp:  1  1  1        1
A:    1  1  1        1
B:    2  2  2        2

Now, because the network is running over time, we need some way of evaluating the network output. One method for doing this is to average the network output over a window of time. Another method for evaluating the network output is to simply choose a specific time t (e.g., the last timestep of the simulation) to read the output. For the rate neurons, both of these methods produce identical results.
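In code, the two evaluation methods might look like this (a sketch; output stands in for probed data such as sim.data[out_probe]):

import numpy as np

# Stand-in for probed network output, shape (n_steps, n_outputs)
output = np.random.rand(1000, 10)

# Method 1: average the output over a window of time (last 100 steps here)
windowed = output[-100:].mean(axis=0)

# Method 2: read the output at one chosen timestep (the last step here)
snapshot = output[-1]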

The spiking network example
Now, let us convert the neurons to spiking neurons. The neuron activation logic for the SpikingRectifiedLinear neuron can be found here, and I’ll be using it as a guide to demonstrate the behaviour of the spiking model. Note that the neuron activation logic allows for multiple spikes to appear in one timestep.
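As a rough paraphrase of that activation logic (a simplified sketch, not the actual Nengo implementation; see the linked code for the real thing):

import numpy as np

def spiking_relu_step(J, voltage, dt, amplitude=1.0):
    """Simplified sketch of SpikingRectifiedLinear's per-timestep update."""
    # Integrate the rectified input current over the timestep
    voltage = voltage + np.maximum(J, 0) * dt
    # Every whole unit of accumulated voltage becomes one spike, so a large
    # enough input can produce multiple spikes in a single timestep
    n_spikes = np.floor(voltage)
    voltage -= n_spikes
    # Spikes are emitted with value amplitude / dt each, so that the output
    # averaged over time matches the rate neuron's output
    return (amplitude / dt) * n_spikes, voltage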

So, let’s run through the simulation. For an input of 1, neuron A should have a firing rate of 1Hz. This means that it should spike every 1/dt timesteps. For this example, let’s take dt to be 0.1s, so it should spike every 10 timesteps:

t:   dt 2dt 3dt ... 9dt 10dt 11dt
inp:  1  1  1        1    1    1
A:    0  0  0        0   10    0

From the output above, several things are evident:

  • The neuron only outputs a value when it spikes. At every other time, the output is 0.
  • The output spike has a value of 1/dt; this ensures that the total “energy” of the spiking neuron, averaged over time, is equivalent to the expected output value of 1 (for an input of 1). I.e., if you take that 1/dt spike and average it over 1/dt timesteps, you’ll get the same output as the rate neuron.

Now comes the interesting part: what happens to neuron B? At t = 10dt, the input to B will be 20, and if we put that through the SpikingRectifiedLinear activation function, it will produce two spikes in that timestep. Assuming no propagation delay, the output of the network will look like this (a quick numeric check follows the table):

t:   dt 2dt 3dt ... 9dt 10dt 11dt
inp:  1  1  1        1    1    1
A:    0  0  0        0   10    0
B:    0  0  0        0   20    0  
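As a quick check of that two-spike claim, using the step sketch from earlier with the example’s dt = 0.1:

import numpy as np

out, v = spiking_relu_step(J=np.array([20.0]), voltage=np.array([0.0]), dt=0.1)
print(out)  # [20.] -> two spikes of 1/dt = 10 each, in one timestep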

Now, let’s try to evaluate the network output. If we were to average over a 10dt window, we’ll get the “correct” output of 2. However, if we were to choose an arbitrary point in the simulation to evaluate the output, we would see that 9 times out of 10, the network output is 0, which is completely incorrect!

The takeaway from this very simplistic example is that for spiking networks, information is only transmitted from layer to layer, and from layer to output when a spike happens. For more complex networks this can also have the effect of reducing the overall network accuracy because spikes may have to arrive at the same time for a neuron to produce an output spike.

The spiking network with scale_firing_rates
So, how does the scale_firing_rates parameter address this issue? For this example, let’s take the extreme measure of increasing the scale_firing_rates parameter to 10. As before, the input to neuron A is 1, but with a gain of 10, the neuron’s expected firing rate is now 10Hz (i.e., once every timestep). To compensate for the increased firing rate, we divide the amplitude of the output spike by scale_firing_rates to keep the neuron’s total “energy” the same. Since a spike has a default amplitude of 1/dt, the new spike amplitude would be 1/(10*dt) = 1 (with dt = 0.1s). This results in the following output (a quick check with the step sketch follows the table):

t:   dt 2dt 3dt ... 9dt 10dt 11dt
inp:  1  1  1        1    1    1
A:    1  1  1        1    1    1
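Running neuron A through the step sketch reproduces the row above (the factor of 10 on the input and the 1/10 on the amplitude are the scale_firing_rates scaling):

import numpy as np

v = np.array([0.0])
for _ in range(3):
    # Input of 1 scaled up to 10; spike amplitude scaled down to 1/10
    out, v = spiking_relu_step(J=np.array([10.0]), voltage=v, dt=0.1,
                               amplitude=1 / 10)
    print(out)  # [1.] on every timestep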

Now, let’s look at neuron B. At each timestep, the neuron receives an input value of 2. With the scale_firing_rates value of 10, this increases the neuron’s input to 20. What this means is that the expected neuron firing rate is 20Hz, i.e., twice every timestep, or rather, the output should be 20 every timestep. As with neuron A, we divide the amplitude of the spikes by scale_firing_rates, which results in an output of 2 every timestep:

t:   dt 2dt 3dt ... 9dt 10dt 11dt
inp:  1  1  1        1    1    1
A:    1  1  1        1    1    1
B:    2  2  2        2    2    2

Looking back at the rate network, we recognize this as the output of the rate network! Thus, by increasing the scale_firing_rates parameter, we’ve essentially replicated the behaviour of the rate network.

Of course, increasing the scale_firing_rates parameter to such a large value is an extreme, and in general we don’t do this, because we lose the advantage of spikes, which is that information is only transmitted (i.e., energy is used) when a spike is emitted. You can perform the same exercise with a lower value of scale_firing_rates and see the impact it has. As a quick example, let’s set scale_firing_rates = 5. What you should see is this:

t:   dt 2dt 3dt ... 9dt 10dt 11dt
inp:  1  1  1        1    1    1
A:    0  2  0        0    2    0
B:    0  4  0        0    4    0

Here, the network is only producing spikes every other timestep (i.e., energy is about half of the scale_firing_rates=10 network), and the output value (4) is closer to the expected output (2) than the original scale_firing_rates=1 spiking network. As in the Keras-to-SNN example, synaptic filtering can be added to smooth out the spikes to produce a more desirable output.
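For completeness, a sketch of how synaptic filtering can be added during conversion (reusing the imports and the placeholder model from the earlier snippet; the synapse argument applies a low-pass filter to the output of all neurons in the converted network):

converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=5,
    synapse=0.005,  # low-pass filter time constant, in seconds
)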

The quick summary of the effect of the scale_firing_rates parameter is that it increases the spiking rate of the neurons in the network, thereby increasing the rate of information flow (recall, information only flows when a spike occurs) through the network, increasing its accuracy and performance.

Caveat: I should mention that what I’ve explained above is super simplified, to give you a general idea of how converting to spiking neurons can impact the network’s performance. In reality, because of the dynamics of the network (I didn’t even include synaptic filtering or propagation delay in my example), things can get a lot more complicated. :smiley:


Thanks once again @xchoo for sparing some time to explain this in great detail! Really appreciate it. BTW, perhaps these are a few typos:

Shouldn’t this say “divide the amplitude by scale_firing_rates” instead of “by 1/scale_firing_rates”? Also,

it should be 2 instead of 20 every timestep, right?

Biologically, the potential of a neuron builds over continuous time, and if it reaches a threshold, we say that an action potential is fired, which we generally discretize as a spike in one timestep. Please note that this is a single action potential, and the same is the story with LIF neurons too, where we immediately lower the potential of the LIF neuron to 0 in the same timestep that it fires a (i.e. one) spike. So I am finding it a bit difficult to digest that a neuron can fire 2 spikes in a timestep. Is it because we are using the SpikingRectifiedLinear neuron and it has the liberty (as per its code implementation) to fire 2 spikes in one timestep?

Ah, yes. That is correct. The amplitude should be divided by scale_firing_rates, not its reciprocal.

This sentence is correct. The neuron should spike twice every timestep (i.e., 20 per timestep), but this is before the scale_firing_rates scaling reduction is taken into account. With the scale_firing_rates scaling reduction (i.e., divided by 10), the result is a value of 2 every timestep.

As I mentioned in my reply, I was using the activation logic that is specified in the SpikingRectifiedLinear neuron code. Firing twice in one timestep can be understood as the neuron having no refractory period. Or, it can also be understood as the neuron firing twice within the specific time frame that 1 timestep represents. In the example I posted, dt = 0.1s, and it’s entirely possible for even a biological neuron to fire twice within 1/10th of a second, which would be recorded as 2 spikes within the one timestep.

Got it @xchoo! Thank you for your explanations.