Using the Poisson/Stochastic models with nengoDL


I’m trying to use a probabilistic SNN model, similar to what’s discussed in this paper

From the API description, I see that there are implementations for PoissonSpiking and StochasticSpiking neurons. I was trying to use the PoissonSpiking neuron, but Python does not seem to recognize that it exists (it gives me an AttributeError). I installed Nengo and NengoDL via pip and used pip install nengo[all] to make sure I got everything.

To troubleshoot, I basically just copied the source code for the PoissonSpiking neuron and the classes it's built on from the website into my Colab notebook. However, when I tried to create a PoissonSpiking neuron from an LIFRate neuron, it gave me the following error:

PoissonSpiking.base_type: Must be of type 'NeuronType' (got type 'type').

Is there some conflict here between different versions of code, or am I doing something wrong?


Hi @khanus, thanks for looking into Nengo!

The PoissonSpiking and StochasticSpiking classes are quite new; we added them to the development version of Nengo last week. To use the development version, you will have to have git installed, and then you can follow our developer installation instructions.

As for the other issue, the base_type argument to PoissonSpiking must be an instance of a neuron type, rather than the type itself. I am guessing that you wrote something like

neuron_type = PoissonSpiking(LIFRate)

The correct form is

neuron_type = PoissonSpiking(LIFRate())

The reason it has to be an instance is so that you can specify arguments to the base type (LIFRate in this case). For example, if you wanted to change the tau_rc and tau_ref parameters:

neuron_type = PoissonSpiking(LIFRate(tau_rc=0.01, tau_ref=0.001))
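To give some intuition for what PoissonSpiking does conceptually (this is only a sketch, not Nengo's actual implementation): the wrapped rate neuron computes a firing rate, and spikes are then emitted stochastically so that the expected spike count matches that rate. In pure Python, a Bernoulli draw per timestep approximates a Poisson process when rate * dt is small:

```python
import random

def poisson_spikes(rate, dt=0.001, steps=1000, seed=0):
    """Conceptual sketch (not Nengo's code): emit a binary spike train
    whose expected spike count matches `rate` (Hz). For rate * dt << 1,
    one Bernoulli draw per timestep approximates a Poisson process."""
    rng = random.Random(seed)
    return [1 if rng.random() < rate * dt else 0 for _ in range(steps)]

spikes = poisson_spikes(rate=100.0, steps=10000)
```

With rate=100 Hz, dt=0.001 s, and 10,000 steps, the expected spike count is around 1000, with Poisson-like variability around that mean.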

Hi Trevor,

Thanks for your help! That makes total sense. I got basic neuron operations with the PoissonSpiking type working, but I think now I’ve run into another issue.

I started getting the following error:

AttributeError: 'SimNeurons' object has no attribute 'states'

As a sanity check, I took the spiking MNIST notebook provided for the “Optimizing a spiking neural network” example and ran it exactly as is, but with the development version of Nengo, and I was able to reproduce this error there as well. The error appears when running the following block:

minibatch_size = 200
sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)

Is there some update that changes how the simulator works or something like that?

Also, I wanted to ask:
If I want to optimize my network using a different approach from the Hunsberger and Eliasmith 2016 paper, how would I go about disabling that and implementing my own within the framework? Are there any examples of people doing this? Would I be better off just using the Nengo simulator instead of NengoDL?


Ah, sorry, I hadn’t read the topic title to see that you wanted to use NengoDL. Those neuron types are so new that we haven’t implemented them in NengoDL yet. We are actively working on it (see this PR) and expect it to be done by the end of this week. So if you can wait a bit, a developer installation of NengoDL is all you should need.

There are plenty of interesting models that you can build with just Nengo. Nengo lets you use the Neural Engineering Framework (NEF), which includes a method for optimizing connection weights offline. More optimization and training methods are available in NengoDL, but whether you will need them depends on your specific use case. Looking at the paper you linked in the original post, many of those types of models are accessible through Nengo alone and don’t require NengoDL. They may require other things, like custom learning rules, that aren’t part of Nengo core itself. If you have a specific model that you’d like to reproduce, or a specific problem to solve, I and other members of the forum would likely be able to point you to the right resources.
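To make the "optimizing connection weights offline" point concrete, here is a toy sketch of the NEF's least-squares decoder solve, with two made-up rectified-linear tuning curves and the 2x2 regularized normal equations solved by hand. None of this is Nengo's API (nengo.Connection handles this for you); it just shows the offline optimization idea:

```python
# Toy NEF sketch (hypothetical, not Nengo's API): given rate responses
# of two neurons over sampled inputs xs, solve the regularized
# least-squares problem A d ~= x for decoders d, entirely offline.
xs = [i / 10.0 for i in range(-10, 11)]          # sampled input values
a1 = [max(0.0, 50 * (x - 0.2)) for x in xs]      # made-up tuning curve 1
a2 = [max(0.0, 50 * (-x - 0.1)) for x in xs]     # made-up tuning curve 2

reg = 0.1
# Normal equations (A^T A + reg * I) d = A^T x, written out for 2 neurons
g11 = sum(a * a for a in a1) + reg
g12 = sum(p * q for p, q in zip(a1, a2))
g22 = sum(a * a for a in a2) + reg
b1 = sum(a * x for a, x in zip(a1, xs))
b2 = sum(a * x for a, x in zip(a2, xs))
det = g11 * g22 - g12 * g12
d1 = (g22 * b1 - g12 * b2) / det                 # decoder for neuron 1
d2 = (g11 * b2 - g12 * b1) / det                 # decoder for neuron 2

# Decoded estimate of the represented value at x = 0.5
est = d1 * max(0.0, 50 * (0.5 - 0.2)) + d2 * max(0.0, 50 * (-0.5 - 0.1))
```

The decoders d1 and d2 are computed once, offline, and the runtime decode is just a weighted sum of neuron activities; with these two curves the estimate at x = 0.5 lands near 0.5.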

Ah I see, that makes sense. I’ll go ahead and try it with just Nengo for now then. What I had in mind was to replicate and iterate on the model described by equations 1 and 2 of the paper I linked, and to implement algorithms 1 and 2 that are described there as well. In general I am interested in trying out new learning rules on probabilistic SNN models. My current main goal is to build a lightweight spiking autoencoder with a probabilistic SNN model, mainly for the following reason (from the paper):

“Practical implementations of the membrane potential model (1) can leverage the fact that linear filtering of binary spiking signals requires only carrying out sums while doing away with the need to compute expensive floating-point multiplications.”
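The quoted point can be illustrated with a small sketch (names here are illustrative, not from the paper): because each spike is 0 or 1, convolving a spike train with a synaptic filter reduces to conditionally adding the filter taps, with no multiplications at all.

```python
def filter_spikes(spikes, kernel):
    """Filter a 0/1 spike train with `kernel` using sums only:
    each spike adds a (shifted) copy of the kernel to the output,
    so no floating-point multiplications are needed."""
    out = [0.0] * len(spikes)
    for t, s in enumerate(spikes):
        if s:  # spike: add the kernel starting at time t
            for k, h in enumerate(kernel):
                if t + k < len(out):
                    out[t + k] += h
    return out

filter_spikes([1, 0, 0, 1, 0], [0.5, 0.25])
# -> [0.5, 0.25, 0.0, 0.5, 0.25]
```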

Thanks so much for your help!