Scaling firing rate of neurons: spike count

Hello,

I am trying to convert a Keras CNN model into a spiking one.

Following the tutorial (https://www.nengo.ai/nengo-dl/v3.3.0/examples/keras-to-snn.html), I changed the scale of the firing rates to improve the accuracy. Scaling the firing rates by 200 indeed results in good accuracy.

However, I observe a strange trend: increasing the scale of the firing rates decreases the total number of spikes emitted by the network during inference (the sum of all spikes emitted at each layer).

How can I interpret this? Why does the network emit fewer spikes, instead of more, as the scale increases?

Thank you for your help!

It is most likely because of how you’re measuring firing rates.

In Nengo, we allow neuron types to have an amplitude, which is a scaling value that we apply to all spikes coming out of that neuron. When applying scale_firing_rates, what we do is scale up the input to the neurons by a factor of scale_firing_rates, and scale down the amplitude by a corresponding amount. This lets the neurons fire more frequently while keeping the time-averaged output the same (so that we don’t have to rescale the weights in the network). Because of this, if you try to measure the firing rates just by averaging the neural output over time, you’ll get about the same value no matter what you choose for scale_firing_rates.
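You can inspect this directly after conversion. A minimal sketch (assuming the model and conv0 variables from the tutorial you linked, and swapping ReLUs for spiking neurons as that notebook does):

import nengo
import nengo_dl
import tensorflow as tf

# `model` is your Keras model and `conv0` the output tensor of its first
# convolutional layer (both taken from the referenced notebook)
nengo_converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=200,
)

neurons = nengo_converter.layers[conv0]
# the input weights were scaled up by 200 and the amplitude scaled down by
# the same factor, so this should print 1 / 200 = 0.005
print(neurons.ensemble.neuron_type.amplitude)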

There are two ways to work around this. One is to get the neuron amplitude for the ensemble and explicitly divide it out:

conv0_neurons = nengo_converter.layers[conv0]
with nengo_converter.net:
    conv0_probe = nengo.Probe(conv0_neurons)

# run your simulation here

conv0_rates = sim.data[conv0_probe].mean(axis=0) / conv0_neurons.ensemble.neuron_type.amplitude

The other way is to not worry about the values of the neuron output, but just count the number of timesteps where the neural output is non-zero. This works well if neurons do not fire more than once per timestep (i.e. for firing rates below 1 / sim.dt), but breaks down if they do, as the toy example further below shows.

conv0_rates = (sim.data[conv0_probe] > 0).mean(axis=0) / sim.dt

For both of these examples, I’m using the same variables as the notebook you referenced (e.g. conv0 is the tensor out of the first Keras convolutional layer, nengo_converter is the nengo_dl.Converter, sim is the nengo_dl.Simulator), so hopefully things are clear.
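To see concretely where that second method breaks down, here is a small self-contained toy example (my own setup, not from the notebook): a single SpikingRectifiedLinear neuron biased to fire at about 2500 Hz, well above 1 / dt = 1000 Hz. Dividing out the amplitude still recovers the right rate, while the non-zero count saturates.

import numpy as np
import nengo

with nengo.Network() as net:
    # one neuron driven to ~2500 Hz, well above 1 / dt = 1000 Hz
    ens = nengo.Ensemble(
        1, 1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        gain=np.ones(1),
        bias=np.array([2500.0]),
    )
    probe = nengo.Probe(ens.neurons)

with nengo.Simulator(net) as sim:
    sim.run(1.0)

amplitude = ens.neuron_type.amplitude
print(sim.data[probe].mean(axis=0) / amplitude)     # ~2500 Hz (correct)
print((sim.data[probe] > 0).mean(axis=0) / sim.dt)  # saturates at 1000 Hz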

Hello @duzzi, along with @Eric’s reply, you might find this useful for understanding the behaviour of scale_firing_rates in detail.

Hello @Eric, Hello @zerone,

Thank you for your answers and advice. I am still a bit confused about how I should use scale_firing_rates. From @xchoo’s answer here, I don’t understand whether I should multiply or divide the result by scale_firing_rates.

I use SpikingRectifiedLinear(), which can spike more than once per timestep, if I understand correctly.
My solution to compute the number of spikes per neuron across the simulation is the following:

import numpy as np

def get_spikes_per_neurons(sim, probes, dt=0.001):
    spikes_per_layer = []
    for l in range(len(probes)):
        p = probes[l]
        spikes = sim.data[p]
        total_spikes_per_neurons = np.sum(spikes / (1 / dt), axis=0)
        spikes_per_layer.append(total_spikes_per_neurons)

    return spikes_per_layer

where probes is a list of probes attached to the neurons of each layer.

Is my solution correct, or should I multiply the result by the scale_firing_rates value?

Spikes from a neuron have magnitude amplitude / dt. When you’re using scale_firing_rates, then amplitude /= scale_firing_rates. So you want to multiply spikes by dt * scale_firing_rates. (Assuming the original neuron amplitude is 1, which it will be unless you’ve otherwise changed it.)
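As a quick numeric sanity check of that rule (again assuming the original amplitude is 1): each spike shows up in the probe data with value amplitude / dt, so multiplying by dt * scale_firing_rates maps each spike back to a count of exactly 1.

dt = 0.001
for scale in [1, 10, 100, 1000]:
    amplitude = 1 / scale         # what the converter sets internally
    spike_value = amplitude / dt  # the value recorded per spike
    # the last column recovers exactly one spike per spike event
    print(scale, spike_value, spike_value * dt * scale)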


Thank you @Eric! It is clear now.

Hello @Eric.

My edited code is:

import numpy as np

def get_spikes_per_neurons(sim, probes, scale_firing_rate, dt=0.001):
    spikes_per_layer = []
    for l in range(len(probes)):
        p = probes[l]

        r = sim.data[p]
        _min, _max = r.min(), r.max()

        print("{} : [{}, {}]".format(scale_firing_rate, _min, _max))
        # Print produces:
        # 1 : [0.0, 999.9999389648438]
        # 10 : [0.0, 100.0]
        # 100 : [0.0, 10.0]
        # 1000 : [0.0, 1.0]

        spikes = sim.data[p] * dt * scale_firing_rate
        total_spikes_per_neurons = np.sum(spikes / (1 / dt), axis=0)
        spikes_per_layer.append(total_spikes_per_neurons)

    return spikes_per_layer

But I am experiencing strange behavior: total_spikes_per_neurons takes values like 0.001 and 0.01.

Is the original neuron amplitude 1 or 1000 (since for scale=1, max_neuron_value = 999.99)?
I am not sure I understand what the value of the neuron amplitude represents.

If max_neuron_value=1, does the neuron spike at 1 Hz or 1 kHz?
If max_neuron_value=2, does the neuron spike at 2 Hz or 500 Hz?
And in my case, for scale=1, if max_neuron_value=999.99, what does that mean?

Thank you for your help.

Your problem is that you’re scaling by dt twice: you multiply by dt when computing spikes, and then spikes / (1 / dt) multiplies by dt again. You want total_spikes_per_neurons = np.sum(sim.data[p] * dt * scale_firing_rate, axis=0).
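For reference, folding that fix back into your helper gives something like this (still assuming the original neuron amplitude is 1):

import numpy as np

def get_spikes_per_neurons(sim, probes, scale_firing_rate, dt=0.001):
    spikes_per_layer = []
    for p in probes:
        # each spike is recorded as (1 / scale_firing_rate) / dt, so
        # multiplying by dt * scale_firing_rate turns it back into a 1
        total_spikes_per_neurons = np.sum(
            sim.data[p] * dt * scale_firing_rate, axis=0
        )
        spikes_per_layer.append(total_spikes_per_neurons)

    return spikes_per_layer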

The neuron amplitude does not affect how often a neuron spikes; it only affects the magnitude of the spikes. Specifically, the magnitude of a spike is amplitude / dt. So if your amplitude is 1, your dt is 0.001, and your neuron is firing at 500 Hz, you’ll get an output from that neuron like [0, 1000, 0, 1000, 0, 1000, 0, 1000, ...]. Changing the amplitude will just scale that output timeseries (multiply it by whatever you set the amplitude to). The frequency of the spikes will not change.
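You can reproduce that pattern with a standalone toy example (my own setup, not from the notebook): a single SpikingRectifiedLinear neuron with the default amplitude of 1, biased to fire at 500 Hz.

import numpy as np
import nengo

with nengo.Network() as net:
    ens = nengo.Ensemble(
        1, 1,
        neuron_type=nengo.SpikingRectifiedLinear(),
        gain=np.ones(1),
        bias=np.array([500.0]),  # fires at 500 Hz
    )
    probe = nengo.Probe(ens.neurons)

with nengo.Simulator(net) as sim:  # dt = 0.001
    sim.run(0.01)

# each spike appears as amplitude / dt = 1 / 0.001 = 1000, on every other
# timestep: [0, 1000, 0, 1000, ...]
print(sim.data[probe].ravel())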

With the converter, we use scale_firing_rates to scale the input weights to each neuron, thereby changing the firing rate, and we scale the amplitude in the opposite direction so that the sum of the output timeseries remains the same. So if you go back to that neuron at 500 Hz, if we have scale_firing_rates = 0.5, then the firing rate will get scaled to 250 Hz, and the amplitude will become twice as large. So you would get a timeseries like [0, 0, 0, 2000, 0, 0, 0, 2000, ...]. As you can see, this has the same sum (over time) as the previous output, but the neuron is firing half as often (but each spike is twice as large).
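And mimicking scale_firing_rates = 0.5 by hand (input scaled down to 250 Hz, amplitude doubled to 2) shows the same total output, with the spikes now landing every fourth step:

import numpy as np
import nengo

with nengo.Network() as net:
    ens = nengo.Ensemble(
        1, 1,
        neuron_type=nengo.SpikingRectifiedLinear(amplitude=2),
        gain=np.ones(1),
        bias=np.array([250.0]),  # the 500 Hz input scaled down by 0.5
    )
    probe = nengo.Probe(ens.neurons)

with nengo.Simulator(net) as sim:
    sim.run(0.016)

# spikes now land every fourth step with magnitude 2 / 0.001 = 2000:
# [0, 0, 0, 2000, 0, 0, 0, 2000, ...], same sum over time as before
print(sim.data[probe].ravel())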