Dear @xchoo, thank you for your prompt reply and for answering my question so patiently. I still have some confusion at the back of my mind that I would like to clear up.
> I.e., 1 spike every 10 timesteps (note that the spike is 1000 in magnitude since it’s 1/dt). If you compare the rate and spike output, you can easily come to the conclusion that the spike output is much noisier than the rate network output.
I will try to explain my understanding with the following example. Suppose we have a rate neuron with the following rates at a time step of 1 ms:
Rate neuron: 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 200, 100, 100, 100, 100, 100, 100, 100, 100, 100, …
Assuming no refractory period, the spiking equivalent would be:
Spike neuron: 0, 0, 0, 0, 0, 0, 0, 0, 0, 1000, 0, 0, 0, 0, 0, 0, 0, 0, 1000, 0
Looking first at the rate neuron: at 100 Hz it will generate a spike every 10 ms, so in the example above the first spike will be at 10 ms. Since the rate at the 11th time step is 200 Hz, the second spike will be at 15 ms. At 16 ms the rate is back to 100 Hz, so the next spike will be generated at 25 ms, as shown in the figure below.
Is my understanding correct?
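To check my own reasoning, here is a minimal sketch (not actual Nengo code; the function name and integer-millisecond bookkeeping are my own) of the inter-spike-interval logic above: the rate in effect at the moment of the previous spike determines how long until the next one.

```python
def isi_spike_times_ms(rates_hz, dt_ms=1):
    """Return spike times in ms, where each inter-spike interval is
    1/rate, using the rate in effect at the previous spike time."""
    spikes = []
    t_ms = 0
    total_ms = len(rates_hz) * dt_ms
    while True:
        idx = min(t_ms // dt_ms, len(rates_hz) - 1)
        t_ms += round(1000 / rates_hz[idx])  # ISI in ms = 1 / rate
        if t_ms > total_ms:
            break
        spikes.append(t_ms)
    return spikes

# Rates from the example: 100 Hz, except 200 Hz at the 11th step.
print(isi_spike_times_ms([100] * 10 + [200] + [100] * 19))  # → [10, 15, 25]
```

This reproduces the 10 ms, 15 ms, 25 ms spike times I described above, so at least the arithmetic of my reasoning is self-consistent.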
Now for the spiking equivalent neuron. My understanding here is that 1/dt represents a threshold, assuming there is no refractory period. Is that right?
So the neuron will start accumulating over the first 9 steps; when it reaches 1000 at the 10th time step, it will spike. At the 11th time step the rate is 200 Hz, and then it is 100 Hz again, so the accumulator will next reach 1000 at the 19th time step, where it will spike. It will then spike again at the 29th step, and so on…
This is my understanding of rate neurons and spiking neurons.
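The accumulator picture I have in mind can be sketched as follows (again, my own illustrative code, not Nengo internals; I use integer arithmetic with the threshold 1/dt = 1000 to avoid floating-point drift):

```python
def accumulate_spikes(rates_hz, dt=0.001):
    """Integrate the rate each step; emit a spike of magnitude 1/dt
    whenever the running sum reaches the threshold 1/dt."""
    out = []
    acc = 0
    threshold = round(1 / dt)  # 1000 for dt = 1 ms
    for r in rates_hz:
        acc += r                   # add the instantaneous rate (Hz)
        if acc >= threshold:
            out.append(threshold)  # spike of magnitude 1/dt
            acc -= threshold       # keep the leftover charge
        else:
            out.append(0)
    return out

out = accumulate_spikes([100] * 10 + [200] + [100] * 19)
print([i + 1 for i, v in enumerate(out) if v])  # → [10, 19, 29]
```

Under this model the spikes land on the 10th, 19th, and 29th time steps, matching the sequence I wrote out above.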
> In NengoDL, however, since TensorFlow is used to train these networks, and since the gains and connection weights (encoders + decoders) are combined to form the TensorFlow equivalents before training, after training, it is possible that these individual components (encoders, decoders, gains) become inseparable from the overall connection weight value.
Does this mean that we can represent the behavior of the neuron (with a constant refractory period and RC constant) by the connection weights and biases alone? In other words, would I only need the biases and connection weights to map the network onto an FPGA?
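To make sure I understand the "folding" you describe, here is a small NumPy sketch. The dimensions, array names, and random values are purely illustrative (not NengoDL's internal API); it only checks that gain, encoders, and decoders can be fused into one dense weight matrix that, together with the bias, produces the same input current.

```python
import numpy as np

# Hypothetical sizes: 1-D represented signal, 3 pre neurons, 2 post neurons.
rng = np.random.default_rng(0)
decoders = rng.standard_normal((3, 1))  # pre spikes -> decoded signal
encoders = rng.standard_normal((2, 1))  # signal -> post neuron input
gain = rng.uniform(1, 2, size=2)        # per-post-neuron gain
bias = rng.uniform(-1, 1, size=2)       # per-post-neuron bias

# Fuse gain and encoders/decoders into a single weight matrix:
# W[i, j] = gain[i] * (encoders[i] . decoders[j])
W = gain[:, None] * (encoders @ decoders.T)  # shape (2, 3)

# Input current to the post neurons from a pre spike vector a:
a = np.array([0.0, 1000.0, 0.0])  # one pre neuron spiking (magnitude 1/dt)
J_factored = gain * (encoders @ (decoders.T @ a)) + bias
J_fused = W @ a + bias
print(np.allclose(J_factored, J_fused))  # → True
```

If this algebra is right, then the fused `W` and `bias` carry all the connection-side information, which is what I would load onto the FPGA.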
Another question is about training. When we train in nengo_dl, does the neuron behave like a conventional artificial-neural-network neuron during the training process, or is it a rate-based neuron?
Thank you very much for your patience and for answering my questions.