Implementing a Regression Model

I have followed your method:
"multiply by the dt (which is 0.001 in your case, unless you changed it during simulation). This is done in the scaled_data *= 0.001 line. This line rates = np.sum(scaled_data, axis=0) / (n_steps * nengo_sim.dt) calculates the firing rate from the spikes scaled_data."

I extracted this scaled data, which covers a total of 100 neurons.

I plotted those neurons (please find the plot attached).
You can see that neurons 17 and 18 are maximally activated and the rest are all 0; I plotted all 100 neurons in the same way. Now I am a bit confused.
I want to know which target this neuron is activated for.

Hello @ssp, not all of the neurons are supposed to spike for any given test image. Some of them in a layer will spike, some won't. So when you choose 100 neurons, you aren't guaranteed to see a firing rate for all of them. I am also a bit confused about how you are calculating a percentage firing rate: firing rates are in Hz, and they generally don't have an upper bound for the SpikingRectifiedLinear() neuron (as it can spike more than once in a timestep). Therefore it makes more sense to plot them as raw values, i.e. in Hz.

I am attaching a minimal script where you can see how I have plotted the firing rates of 100 random neurons (each of which has spiked for the first input test image). Once you choose test_image_index (in the attached Jupyter notebook) you can set the target image and identify which neurons have spiked for it.

I would highly encourage you to tinker with the attached notebook and try experimenting with it. Feel free to ask if you have any doubts.
visualizing_firing_rates.ipynb (69.4 KB)
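In broad strokes, the attached notebook does something like the following (a rough sketch only, not the notebook itself; the array spikes, its shape, and dt are my assumptions):

import numpy as np
import matplotlib.pyplot as plt

# spikes: probed spiking output of a layer, assumed shape (n_images, n_steps, n_neurons)
test_image_index = 0
dt = 0.001
n_steps = spikes.shape[1]

rates = np.sum(spikes[test_image_index] * dt, axis=0) / (n_steps * dt)  # Hz, per neuron

active = np.nonzero(rates)[0]                      # neurons that spiked for this image
chosen = np.random.choice(active, size=min(100, active.size), replace=False)

plt.bar(range(len(chosen)), rates[chosen])
plt.xlabel("neuron index (random subset that spiked)")
plt.ylabel("firing rate (Hz)")
plt.title("Firing rates for test image %d" % test_image_index)
plt.show()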

Thank you very much. This is my model. I have trained it and saved the parameters:

sim.fit(
    inTrainNgo, {nengo_output: outTrainNgo},
    validation_data=(inputValidationNgo, outputValidationNgo),
    epochs=100,
)
sim.save_params("./nengo-model_trial")

Now, when I want to extract the hidden-layer neurons:

from numpy import load

data = load('nengo-model_trial.npz')
lst = data.files
for item in lst:
    print(item)
    print(len(data[item]))
I got a result like this:
arr_0
100
arr_1
50
arr_2
100
arr_3
250

I want only the hidden layer, i.e. arr_2.
I want to extract the hidden-layer neurons from my entire sequential model.
Let's say that for each input there are one or two hidden layers and one output layer, right?
So I want to check the neurons from each hidden layer.

I have shared a sample notebook. I know you must be really busy, and I am sorry to bother you again, but please check it once.
test_simulation.ipynb (11.4 KB)

Hello there,

Unfortunately, your uploaded notebook was buggy and I wasn't quite able to follow what you are trying to do. I see that you have defined a simple sequential model, trained it with TF-Keras, and then created another (random or related?) model in NengoDL and simulated it without setting any training options (e.g. optimizer, loss, etc.).

I have to ask: what are you trying to achieve with NengoDL, and which NengoDL tutorial are you referencing?

With respect to your questions above, sim.save_params("./nengo-model_trial") saves the model's trained parameters; it has nothing to do with the neuron activations. When you load the trained parameters back, i.e. data = load('nengo-model_trial.npz'), you just get the trained weights, and your loop prints some of the model's parameters (I am not sure which array corresponds to which parameter). But it is definitely not the neuron activations, if that is what you are looking for.
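If what you want is the hidden-layer activations themselves, the usual pattern (following the Keras-to-SNN conversion tutorial) is to probe the converted layer and run the simulator. A rough sketch, where model, hidden_layer, keras_input and test_images are placeholders for your own objects:

import nengo
import nengo_dl

converter = nengo_dl.Converter(model)               # model: your trained tf.keras model

with converter.net:
    # probe the neuron output of the converted hidden layer
    hidden_probe = nengo.Probe(converter.layers[hidden_layer])

with nengo_dl.Simulator(converter.net) as sim:
    sim.load_params("./nengo-model_trial")          # loads trained weights, not activations
    data = sim.predict({converter.inputs[keras_input]: test_images})
    activations = data[hidden_probe]                # shape: (n_images, n_steps, n_neurons)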

I have followed this method (Converting a Keras model to a spiking neural network — NengoDL 3.4.1.dev0 docs).
Moreover, I want to create a spiking neural network for a multiclass regression task, and then I want to observe how the spiking neurons behave for each class compared to basic (rate) neurons.

Then you are on the right track. Please follow the code I posted here, or the code in the tutorial you are following, and try to adapt it to your use case. You will be able to visualize the firing rates of the neurons.

Thank you so much, sir. I am able to visualize the neurons now. Your references really helped me a lot. Once again, thank you so much.

Glad that my references helped! You’re welcome :slight_smile: !

I was going through this discussion of yours ([Nengo DL]: Understanding the internals of Nengo-DL; Theory and Papers) and I have some fundamental doubts about how spiking networks learn.
Let's say I have a basic 3-layer architecture.
How does the NengoDL converter learn and predict for a regression or classification task? Is the network using the backpropagation seen in traditional NNs, or something like Hebb's law / the Spike-Timing-Dependent Plasticity rule?

Is there any biological significance to using the spiking ReLU activation function? Why not a spiking tanh?

I understood the LIF method for the firing of neurons, but I am still not sure how they help in prediction.

So far I have mainly worked with the ANN-to-SNN conversion paradigm to create spiking networks. Within this paradigm, you first train a traditional ANN with the usually preferred ReLU neurons and then convert it to a spiking one by replacing the traditional ReLU with a spiking version of it (apart from other network modifications). This replacement process is facilitated by nengo_dl.Converter(). Thus no actual learning is happening in the NengoDL Converter. This remains sort of "true" even when you train a network using the NengoDL APIs, because while training, NengoDL still uses TF under the hood to train its network. Note that in both cases backpropagation is used, not STDP or similar rules. Although, as you may know, there are other methods, e.g. Surrogate Gradient Descent, which attempt training with spikes directly.
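For concreteness, the conversion step itself (no training involved) looks roughly like this, where model is a placeholder for an already-trained tf.keras model with ReLU activations:

import nengo
import nengo_dl
import tensorflow as tf

converter = nengo_dl.Converter(
    model,
    swap_activations={tf.nn.relu: nengo.SpikingRectifiedLinear()},  # ReLU -> spiking ReLU
    scale_firing_rates=100,   # optionally scale rates up so spikes carry more information
    synapse=0.01,             # low-pass filter the spikes between layers
)
# converter.net is now a spiking Nengo network that reuses the trained weights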

There isn't any more biological significance to using SpikingRectifiedLinear() than there is to using LIF() neurons. Since you train with ReLU neurons, you use SpikingRectifiedLinear() neurons (an integrate-and-fire neuron model) as they generalize better to ReLU (compared to LIF()), although this may depend upon experimental factors.
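As an illustration of why the spiking ReLU pairs naturally with a ReLU-trained network, you can compare the steady-state rate responses of the two neuron models (a small sketch of my own; the gain and bias values are arbitrary and only chosen so the curves are on comparable scales):

import numpy as np
import matplotlib.pyplot as plt
import nengo

x = np.linspace(-1, 3, 200)
relu_rates = nengo.RectifiedLinear().rates(x, gain=30.0, bias=0.0)  # linear above 0, like ReLU
lif_rates = nengo.LIF().rates(x, gain=1.0, bias=0.0)                # threshold, then saturating

plt.plot(x, relu_rates, label="RectifiedLinear (rate ReLU)")
plt.plot(x, lif_rates, label="LIF")
plt.xlabel("input")
plt.ylabel("firing rate (Hz)")
plt.legend()
plt.show()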

I did not get this part though:

Can you explain what you are looking for here?

Thanks for responding. Since we are using the LIF parameter, I thought there must be some biological significance to creating these spiking neurons with it.

Theoretically, biological membranes are not ideal capacitors, which means the membrane voltage slowly leaks over time, returning towards its resting value. So I want to know: when we add LIF neurons to the model, what is their role in the SNN model?

I am so sorry for asking such silly questions, but we are converting rate-based neurons (e.g. continuous analog signals) to spiking neurons (binary spike inputs).
Based on the time steps we mention, the SNN processes the formation of a random number, whose value is compared to the magnitude of the corresponding input; based on the magnitude of the values, the spikes will be generated (just like in biological neurons, where based on the summation of transmitters the neuron will activate or stay at its resting position).

Now consider the digit classification example.
I want to know, when we generate these spikes, how does a spike know whether the number is 9 or 1?
Is it calculated as a distance between the input (trained spikes) and the output, or some kind of similarity measure to find the correct number?

When you mention the LIF parameter, or LIF values, I am assuming you mean LIF neurons. As you know, a LIF neuron is a spiking neuron, i.e. it produces a spike (of a certain amplitude) whenever its membrane potential crosses the threshold, and that is effectively its role in an SNN model. We have LIF/SpikingRectifiedLinear neurons in place of ReLU neurons, and their spiking activity, when smoothed out, represents a value (actually the expected activation value of the rate-based ReLU neuron).

I am unable to get what you mean by the above. Please elaborate.

For the digit classification example, the individual spikes don't carry any such high-level information. Rather, they are supposed to represent a real (analog) value when a bunch of them (in temporal order) are smoothed out. Consider a very small and shallow neural network of just two neurons: an input neuron and an output neuron, both ReLU, connected by a connection weight of 2.

Now, let a value be input to the first neuron, say 2.34; the second neuron should then output a value of 2.34 × 2 = 4.68 (as 2 is the connection weight). When we replace those ReLU neurons with spiking neurons, the first spiking neuron should represent the value 2.34 (or close to it, i.e. oscillate around it) when given the same input. Similarly, the second spiking neuron should represent 4.68 (i.e. oscillate around it; since spiking neurons' outputs are inherently noisy, they cannot represent a fixed value). Representing those values via the output spikes of the neurons is done by synapsing/filtering those spikes. Now extend this understanding to the entirety of a spiking network: each spiking neuron in an SNN just tries to approximate the actual real-valued activation of the corresponding rate neuron (had the network been run with those rate neurons in TF/PyTorch). You may want to watch this tutorial; the first 20 to 30 minutes should give you a basic understanding. Also, watch the Nengo Summer School videos for a better understanding of spiking networks.
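To make that two-neuron example concrete, here is a small runnable sketch in plain Nengo (my own illustration rather than code from the thread); the firing rates are scaled up by 100 (similar in spirit to the converter's scale_firing_rates) so that 2.34 and 4.68 become observable spike rates:

import nengo

scale = 100  # firing-rate scale, so a represented value v corresponds to v * scale Hz

with nengo.Network() as net:
    stim = nengo.Node(2.34)                          # the input value from the example

    # one spiking "ReLU" neuron per layer; bias=0 so the input current is exactly
    # what we feed in (gain is unused for direct .neurons connections)
    pre = nengo.Ensemble(1, 1, neuron_type=nengo.SpikingRectifiedLinear(),
                         gain=[1], bias=[0])
    post = nengo.Ensemble(1, 1, neuron_type=nengo.SpikingRectifiedLinear(),
                          gain=[1], bias=[0])

    # drive the first neuron with 2.34 * scale, so it fires at ~234 Hz
    nengo.Connection(stim, pre.neurons, transform=[[scale]], synapse=None)
    # connection weight of 2 between the layers; the spikes are low-pass
    # filtered (synapse) before driving the second neuron
    nengo.Connection(pre.neurons, post.neurons, transform=[[2.0]], synapse=0.01)

    pre_probe = nengo.Probe(pre.neurons, synapse=0.05)    # filtered spike trains
    post_probe = nengo.Probe(post.neurons, synapse=0.05)

with nengo.Simulator(net) as sim:
    sim.run(1.0)

# dividing the filtered rates (in Hz) by the scale recovers the represented values
print(sim.data[pre_probe][-200:].mean() / scale)    # ~2.34
print(sim.data[post_probe][-200:].mean() / scale)   # ~4.68

The printed values should hover around 2.34 and 4.68, with the exact numbers jittering because the spike trains are inherently noisy.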

Hi, may I know how you solved this error: "Cannot call compile after simulator is closed"? Any help is appreciated.

Hi @kkkkking,

Without seeing the actual code, it's hard to tell exactly what is causing this error. However, if I were to guess, it is because you are trying to call the sim.compile function outside the nengo_dl.Simulator context block. For example, this will work:

with nengo_dl.Simulator(model, ...) as sim:
    sim.compile(...)

but this will fail with your error:

with nengo_dl.Simulator(model, ...) as sim:
    ...
sim.compile(...)

In the first instance, the sim.compile call is within the sim context block (see the indentation). In the second instance, the sim.compile call is outside the sim context block (once again, see the indentation). In Nengo and NengoDL, most sim calls (apart from sim.data) are required to be made within the sim context block.

Note that this should also work:

with nengo_dl.Simulator(model, ...) as sim:
    ...  # some code

...  # some more code

with sim:
    sim.compile(...)