Spiking CNN Example in Loihi Questions

Hi Nengo team!

I have been poring over the nengo_loihi documentation lately, as well as the examples (namely Convolutional networks). I am hoping to get a better understanding of how the conv_layer function works, along with other aspects of spiking CNNs on Loihi. The code looks simple, but I’ve been struggling to grasp everything that is happening at a pseudocode/layman’s-terms level. Specifically, my questions are:

  • How does layer = nengo.Ensemble(conv.output_shape.size, 1).neurons implement the activation function? (And what activation function is it?)
  • What is happening in the first convolution layer below, particularly the init argument?

layer, conv = conv_layer(inp, 1, input_shape, kernel_size=(1,1), init=np.ones((1,1,1,1)))

  • np.ones((1,1,1,1)) == array([[[[1.]]]]) Is this the actual kernel that’s convolving (correlating) over the image, one pixel at a time, from inp to an ensemble of equivalent size?
  • Any specific reason for choosing 1, 6, and 24 filters for each layer in the spiking CNN? Or did they just give good results for this MNIST classification example?
  • Can pooling (avg/max) be implemented on Loihi, and if so, how? I noticed that nengo_extras has ConvLayer and PoolLayer classes, but those seem to be outdated, tied to Keras, and incompatible with Loihi?
  • Finally, I found 2 discrepancies between the MNIST dataset provided by urlretrieve('http://deeplearning.net/data/mnist/mnist.pkl.gz', 'mnist.pkl.gz') and the one provided by keras.datasets.mnist.load_data(). The pickle file seems to only contain 50000 training examples, rather than 60000, and the maximum pixel value from the pickle file is not 1.0, but is actually 0.99609375. This is not an important question, but I was just wondering why the datasets are different?

I believe those are all the questions I have for now. I hope I’m not asking too much of the Neng’ gang. :see_no_evil:

Thanks in advance, and happy Friday!

Hi @yadams! I’m not well versed in the convolution stuff, but I’ll try my best to answer.

The activation function is part of the Ensemble, specifically its neuron_type parameter, which defaults to nengo.LIF, the leaky integrate-and-fire activation function. It’s common in spiking CNNs to switch this to nengo.SpikingRectifiedLinear, but the right choice depends on lots of factors.
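For instance, here’s a minimal sketch of swapping the neuron model (the ensemble size here is just illustrative):

import nengo

with nengo.Network():
    ens = nengo.Ensemble(
        784,  # one neuron per "pixel", e.g., conv.output_shape.size
        1,
        neuron_type=nengo.SpikingRectifiedLinear(),  # default is nengo.LIF()
    )
    # connecting via .neurons drives the cells directly, so each neuron
    # applies its activation function to its summed input
    layer = ens.neurons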

The init argument sets the kernel for the convolution. You can find some details in the Convolution docs.
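For reference, the init kernel for nengo.Convolution has shape (kernel_height, kernel_width, input_channels, n_filters), so the example’s 1x1 all-ones kernel would be set up like this (the input shape is illustrative):

import numpy as np
import nengo

conv = nengo.Convolution(
    n_filters=1,
    input_shape=(28, 28, 1),  # (height, width, channels), channels-last
    kernel_size=(1, 1),
    # kernel shape: (kernel_height, kernel_width, input_channels, n_filters)
    init=np.ones((1, 1, 1, 1)),
)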

I haven’t looked at the example closely, but that would be my assumption yes.


I’ll have to leave the rest of the questions to @Eric or @drasmuss, who know that example better. Hopefully the pointers to the docs were helpful!

No specific reason; as you guessed, they just worked well on that example (although we didn’t do an extensive hyperparameter search).

Average pooling can definitely be implemented on Loihi, since it’s just a linear operation (you’re basically adding some pool of inputs together with weight 1/n, where n is the number of items in the pool).
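For example, 2x2 average pooling of a single-channel input can be written as a strided convolution whose kernel entries are all 1/4 (a sketch with illustrative shapes; a multi-channel input would need one such kernel per channel rather than a dense kernel):

import numpy as np
import nengo

# 2x2 average pooling over a (4, 4, 1) input, as a strided convolution
avg_pool = nengo.Convolution(
    n_filters=1,
    input_shape=(4, 4, 1),
    kernel_size=(2, 2),
    strides=(2, 2),
    init=np.full((2, 2, 1, 1), 1.0 / 4.0),  # weight 1/n with n = 4
)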

I don’t think there is any built-in max pooling support in Loihi. So you’d either need to do that part of the computation off-chip (e.g., using a nengo Node), or write a custom snip to implement the max pooling operation.

The keras dataset includes the 10000 validation examples in the training data; the other dataset keeps them separate (so 50000 training, 10000 validation, 10000 test). As for the maximum pixel values, note that 0.99609375 is exactly 255/256, so the pickled dataset appears to have been scaled by dividing by 256, whereas scaling the keras data by 255 gives a maximum of exactly 1.0.
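You can verify both differences with a quick check (a sketch; it assumes the pickle file was downloaded as in the example, and uses tensorflow.keras for the keras data):

import gzip
import pickle

from tensorflow.keras.datasets import mnist

# the pickle holds (train, validation, test) tuples of (images, labels)
with gzip.open('mnist.pkl.gz', 'rb') as f:
    (train_x, _), (valid_x, _), (test_x, _) = pickle.load(f, encoding='latin1')
(k_train_x, _), (k_test_x, _) = mnist.load_data()

print(train_x.shape[0], valid_x.shape[0])  # 50000 10000
print(k_train_x.shape[0])                  # 60000
print(train_x.max())                       # 0.99609375 == 255 / 256
print(k_train_x.max() / 255.0)             # 1.0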

Thank you for the informative replies, @tbekolay and @drasmuss. How would I go about implementing this?

I’ve adapted some code from this StackOverflow question, and you can see it below.

import numpy

def pooling(mat, ksize, method='avg', pad=False, channels_last=False):
    '''Non-overlapping pooling on 2D or 3D data.

    <mat>: ndarray, input array to pool.
    <ksize>: tuple of 2, kernel size in (ky, kx).
    <method>: str, 'max' for max-pooling,
                   'avg' for average-pooling.
    <pad>: bool, pad <mat> or not. If no pad, output has size
           n//f, n being <mat> size, f being kernel size.
           If pad, output has size ceil(n/f).
    <channels_last>: bool, whether the channel axis of a 3D <mat> is
                     the last axis (True) or the first axis (False).

    Return <result>: pooled matrix.
    '''
    if not channels_last and mat.ndim == 3:
        # move the channel axis to the end so the logic below is uniform
        mat = numpy.moveaxis(mat, 0, -1)

    m, n = mat.shape[:2]
    ky, kx = ksize

    _ceil = lambda x, y: int(numpy.ceil(x / float(y)))

    if pad:
        ny = _ceil(m, ky)
        nx = _ceil(n, kx)
        size = (ny * ky, nx * kx) + mat.shape[2:]
        # pad with NaN so that nanmax/nanmean ignore the padded entries
        mat_pad = numpy.full(size, numpy.nan)
        mat_pad[:m, :n, ...] = mat
    else:
        ny = m // ky
        nx = n // kx
        mat_pad = mat[:ny * ky, :nx * kx, ...]

    new_shape = (ny, ky, nx, kx) + mat.shape[2:]

    if method == 'max':
        result = numpy.nanmax(mat_pad.reshape(new_shape), axis=(1, 3))
    elif method == 'avg':
        result = numpy.nanmean(mat_pad.reshape(new_shape), axis=(1, 3))
    else:
        raise ValueError("method must be 'max' or 'avg'")

    if not channels_last and result.ndim == 3:
        result = numpy.moveaxis(result, -1, 0)

    return result

I suspect that I won’t be able to drop this in between convolutional layers as is. Would I need to make an AvgPooling class that looks similar to Convolution? I’m not sure how to move forward from here.

Thanks for the help!

If you want it to work on Loihi, then you’d need to write a custom implementation using Intel’s codebase (NxSDK), and you’d have to consult their documentation for how to go about doing that. But if you don’t need it to run completely on Loihi, I’d start by just creating a nengo.Node that implements basically what you have above. Nodes let you execute arbitrary Python code. You’d be able to use that with Nengo Loihi, but the code itself wouldn’t run on-chip: it would call out to Python to run the code you defined, then send the result back to the board to run through the other parts of the network you define.
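For example, here’s a minimal sketch of wrapping the pooling function above in a Node (the input shape and the connection are hypothetical):

import numpy as np
import nengo

shape_in = (14, 14, 6)  # hypothetical output shape of the previous layer

def pool_func(t, x):
    # reshape the flat Node input, pool it, then flatten the result
    pooled = pooling(x.reshape(shape_in), ksize=(2, 2), method='avg',
                     channels_last=True)
    return pooled.ravel()

with nengo.Network():
    pool_node = nengo.Node(pool_func, size_in=int(np.prod(shape_in)))
    # then connect the previous layer in, e.g.:
    # nengo.Connection(previous_layer, pool_node)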


Personally, I never use average pooling anymore; I just use strided convolution. It’s essentially the same thing: https://arxiv.org/abs/1412.6806

As for max pooling, it’s not straightforward to implement in spikes. There are some implementations (see section 2.2.6 of this paper: https://www.frontiersin.org/articles/10.3389/fnins.2017.00682/full), but it’s significant added complexity for what I believe is marginal benefit. (You need to take filtered versions of the neuron outputs to determine which one has the max rate, and then use a gating function to ensure only that neuron’s spikes get through. I believe this will add more settling time/latency to the network. It’s certainly not straightforward to implement on Loihi.)


Would you directly replace pooling layers with strided convolutional layers (without a nonlinearity, i.e., layer = nengo.Node(size_in=conv.output_shape.size))? Or integrate the strides into the main convolutional layers? Or some other way?

I’m hoping my question is on topic w.r.t. Nengo/CNNs, but a generalized answer helps, too.

I appreciate everyone’s input. Thank you!

I tend to think of pooling layers as grouped with convolutions and nonlinearities into a “generalized convolutional layer”, consisting of convolution->nonlinearity->pooling. In this case, you can replace such a generalized convolutional layer with a strided convolution + nonlinearity.

However, it might be more precise to look for the places where pooling layers are followed directly by a convolution, since this is two linear transforms in a row which can then be replaced with strided convolution. So convolution->nonlinearity->pooling->convolution->nonlinearity would become convolution->nonlinearity->strided convolution->nonlinearity.
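As a sketch, with made-up filter counts and shapes, the replacement looks like:

import nengo

# before: conv -> neurons -> 2x2 avg pool -> conv -> neurons
conv_a = nengo.Convolution(n_filters=6, input_shape=(28, 28, 1),
                           kernel_size=(3, 3))
# after: the pooling and the following convolution merge into one
# strided convolution
conv_b = nengo.Convolution(n_filters=24, input_shape=conv_a.output_shape,
                           kernel_size=(3, 3), strides=(2, 2))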

Do you have a particular network architecture you’re working with?

For the moment, I have just been working with the example CNN using nengo.Convolution and a few small modifications I’ve made.

I noticed that it uses strided convolutional layers in

layer, conv = conv_layer(layer, 6, conv.output_shape,
                         strides=(2, 2))
layer, conv = conv_layer(layer, 24, conv.output_shape,
                         strides=(2, 2))

Ideally, I want to make a few small SNNs that are as close as possible to CNNs I’ve made in Keras/TensorFlow, but with the added bonus of working entirely on the Loihi chip. In the end, I’d like to compare the power consumption of the Loihi chip with that of another chip, such as Intel’s Movidius VPU.

Tangentially, I’d like to see how some state-of-the-art CNNs, or rather their unique components (e.g., ResNet skip connections), would be implemented in Nengo.

If you’re not tied to a particular architecture, I’ve had success with networks based on the All-Conv architecture. There are Keras implementations:

This is what I used as a base for my CIFAR-10 net on Loihi:

It should hopefully be fairly straightforward to add in ResNet-like skip connections. Let us know if you run into any issues, or have any questions.


Thank you for these resources!

I have another question about the CNN example on Loihi.

I only changed the dataset (MNIST → Fashion MNIST), n_epochs (5 → 10), and do_training (False → True). Everything else has been kept the same. When I run this cell,

n_presentations = 50
with nengo_loihi.Simulator(net, dt=dt, precompute=False) as sim:
    # if running on Loihi, increase the max input spikes per step
    if 'loihi' in sim.sims:
        sim.sims['loihi'].snip_max_spikes_per_step = 120

    # run the simulation on Loihi
    sim.run(n_presentations * presentation_time)

    # check classification error
    step = int(presentation_time / dt)
    output = sim.data[out_p_filt][step - 1::step]
    correct = 100 * np.mean(
        np.argmax(output, axis=-1)
        != np.argmax(test_data[out_p_filt][:n_presentations, -1], axis=-1)
    )
    print("loihi error: %.2f%%" % correct)

I’m given a BuildError. The entire traceback is

BuildError                                Traceback (most recent call last)
<ipython-input> in <module>
      1 n_presentations = 50
----> 2 with nengo_loihi.Simulator(net, dt=dt, precompute=False) as sim:
      3     # if running on Loihi, increase the max input spikes per step
      4     if 'loihi' in sim.sims:
      5         sim.sims['loihi'].snip_max_spikes_per_step = 120

C:\ProgramData\Anaconda3\envs\nengo3point0\lib\site-packages\nengo_loihi\simulator.py in __init__(self, network, dt, seed, model, precompute, target, progress_bar, remove_passthrough, hardware_options)
    209
    210         if target in ("simreal", "sim"):
--> 211             self.sims["emulator"] = EmulatorInterface(self.model, seed=seed)
    212         elif target == 'loihi':
    213             assert HAS_NXSDK, "Must have NxSDK installed to use Loihi hardware"

C:\ProgramData\Anaconda3\envs\nengo3point0\lib\site-packages\nengo_loihi\emulator\interface.py in __init__(self, model, seed)
     40     def __init__(self, model, seed=None):
     41         self.closed = True
---> 42         validate_model(model)
     43
     44         if seed is None:

C:\ProgramData\Anaconda3\envs\nengo3point0\lib\site-packages\nengo_loihi\validate.py in validate_model(model)
     12
     13     for block in model.blocks:
---> 14         validate_block(block)
     15
     16

C:\ProgramData\Anaconda3\envs\nengo3point0\lib\site-packages\nengo_loihi\validate.py in validate_block(block)
     17 def validate_block(block):
     18     # -- Compartment
---> 19     validate_compartment(block.compartment)
     20
     21     # -- Axons

C:\ProgramData\Anaconda3\envs\nengo3point0\lib\site-packages\nengo_loihi\validate.py in validate_compartment(comp)
     54     if comp.n_compartments > N_MAX_COMPARTMENTS:
     55         raise BuildError("Number of compartments (%d) exceeded max (%d)" %
---> 56                          (comp.n_compartments, N_MAX_COMPARTMENTS))
     57
     58

BuildError: Number of compartments (1352) exceeded max (1024)

I am running this in an Anaconda virtual environment with

  • nengo_loihi.__version__ == 0.8.0
  • nengo_dl.__version__ == 2.2.0
  • nengo.__version__ == 3.0.0.dev0
  • np.__version__ == 1.16.4
  • tf.__version__ == 1.14.0

I don’t have NxSDK installed on my machine.

Could you explain this error and how to handle it?

Thank you!

Edit: After following the traceback a little bit, I found that it comes from a hardware constraint checked in nengo_loihi/validate.py, so I can’t necessarily see what part of my model corresponds to the number of compartments.

The “number of compartments exceeded max” error generally happens when an ensemble is too big (i.e., it has more neurons than will fit on a single Loihi core). Nengo Loihi doesn’t currently have the ability to split an ensemble across multiple cores, though we are working on that. If you’re using the emulator (not real hardware), I think you can comment out the line that’s raising the build error and it should work. Otherwise, you’ll have to lower the number of neurons in your ensembles, or restructure the network.

If I had to guess, the ensemble size depends on the shape of the data in the dataset. I haven’t looked at the code, but it is probably possible to change the number of neurons per ensemble dimension as a way to get it to fit (though that will definitely affect performance).
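To track down which ensemble is too large, a quick sketch (assuming your network is called net):

# print each ensemble's neuron count; each ensemble currently has to fit
# within a single Loihi core's 1024 compartments
for ens in net.all_ensembles:
    print(ens, ens.n_neurons)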

Hi @Eric,
I read your source code, but I don’t see where you use a ResNet-like skip connection. Can you point me to it, please?

I don’t use ResNet-like skip connections in that code. I was saying that it should be straightforward to add them.

Also, that code is now quite out-of-date. Here’s an updated version of the example: CIFAR-10 convolutional network — NengoLoihi 1.1.0.dev0 docs