Hello @xchoo!
Thank you for the information above, and for your other replies. I am a bit confused about the following:
TF `Conv` layers can be configured with neurons, e.g. ReLU, whereas TF `AveragePool` layers are just a linear transform, whose Nengo equivalent is `nengo.Convolution()`. Since the `AveragePool` layers are represented as `nengo.Node()` objects, I am coming to a "loose" conclusion that a TF `Conv` layer with ReLU neurons is subdivided into two separate operations: first, a `nengo.Node()` object that convolves the learned filters with the input (during the inference phase), and second, a `nengo.Ensemble()` object whose spiking neurons produce the output for the convolved input (coming from the `nengo.Node()` of the TF `Conv` layer).
You mentioned that `Conv` operations are linear (which they indeed are in the absence of the ReLU neurons that follow; please note that I am combining the ReLU neurons with the convolution operation into a single TF `Conv` op, although they need not be combined).

My first question: the linear transform applied to the input is a non-identity operation, so shouldn't it be considered a non-passthrough node, given that the transform operates on the input and produces a new output?

My second question (related to the first): if the linear transform (i.e. `nengo.Convolution()`) is a non-passthrough `Node` op (assuming I am correct in my first question), then its output is sent from the PC to the Loihi neurons to compute spikes, and there will be a communication delay. If so, what is the order of magnitude of this delay?

My third question: if `nengo.Convolution` is instead a passthrough node, then the AveragePooling-equivalent `nengo.Node()` will be a passthrough node as well, and it will be removed from the deployable `nengo.Network()`. In that case, where would the linear transform take place? It is not a spiking operation; it is just an element-wise multiplication followed by a sum (in the case of AveragePooling, or `nengo.Convolution()` for that matter).
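To make the "element-wise multiplication followed by a sum" concrete, here is a plain-NumPy sketch of 2x2 average pooling written exactly in that form; the 4x4 input and the pooling size are assumptions of mine, chosen only to keep the example small:

```python
import numpy as np

# Hypothetical 4x4 input feature map
x = np.arange(16, dtype=float).reshape(4, 4)

# A constant 2x2 kernel of 0.25 so that each output element is the
# mean of one 2x2 patch: element-wise multiply, then sum
kernel = np.full((2, 2), 0.25)

pooled = np.array([
    [np.sum(x[i:i + 2, j:j + 2] * kernel) for j in range(0, 4, 2)]
    for i in range(0, 4, 2)
])
# pooled is the 2x2 average-pooled map
```

Since this is purely multiplications by fixed weights and additions, it is a linear map, which is what makes me unsure whether it belongs on the chip (as connection weights) or on the host (as a node).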
With respect to the following: I gather that one is better off with all the layers being either `nengo.Ensemble()` objects or passthrough `nengo.Node()` objects, to leverage the power efficiency of the Loihi chip to the maximum. I also have a small doubt about my understanding of the following:
the "off-chip neurons" after "Ensemble A (on-chip)" denote a communication channel to the PC (or host), right? Similarly, the "on-chip neurons" after the "non-passthrough node (PC)" denote a communication channel to the Loihi board, don't they? And "Ensemble A/B (on-chip)" denotes the on-chip spiking computation? Please let me know.