My custom object doesn't work with nengo_dl.Simulator

I am trying to use a custom object to construct a neural network and optimize it. However, I ran into problems, so I decided to simply copy the RectifiedLinear example (frontend Neurons subclass, backend Operator subclass, and build function) and construct the network using RectifiedLinear to see if it works. The error for RectifiedLinear is exactly the same as for my custom object:

Traceback (most recent call last):
  File "", line 320, in <module>
    sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)
  File "/Users/lareina/anaconda3/lib/python3.6/site-packages/nengo_dl/", line 176, in __init__
    self.minibatch_size, device, progress)
  File "/Users/lareina/anaconda3/lib/python3.6/site-packages/nengo_dl/", line 130, in __init__
    plan = planner(operators)
  File "/Users/lareina/anaconda3/lib/python3.6/site-packages/nengo_dl/", line 268, in tree_planner
    if mergeable(op, g):
  File "/Users/lareina/anaconda3/lib/python3.6/site-packages/nengo_dl/", line 47, in mergeable
    if[type(op)] !=[type(c)]:
KeyError: <class '__main__.SimRectifiedLinear'>
/Users/lareina/anaconda3/lib/python3.6/site-packages/nengo_dl/ RuntimeWarning: Simulator with model=Model: <Network (unlabeled) at 0xb3ebe9400>, dt=0.001000 was deallocated while open. Simulators should be closed manually to ensure resources are properly freed.

Could anyone tell me what is going on here? Thanks in advance.

What I am trying to do here is to use the LIF model (I want to customize it by randomizing the output of the neuron according to the membrane potential), and to use the same structure as in. However, I notice that in NengoDL "During training NengoDL will automatically be swapping the spiking nengo.LIF neuron model for the non-spiking nengo.LIFRate", so if I do get this custom LIF model working (it currently isn't, which is why I am asking for your help), will the network know to use the non-spiking model for training and the spiking model for testing? If not, can you give me some hints on how to add some randomness to the membrane potential of a LIF neuron? Thank you so much.

Basically, NengoDL doesn't know how to simulate your custom neuron model. When writing your neuron model you specify how to simulate it using numpy, but to run it in NengoDL you also need to specify how to simulate that neuron in TensorFlow (which is what NengoDL uses under the hood). You can see here what that looks like for a rectified linear neuron, along with other examples in that file.

No, NengoDL won’t automatically know how to swap your custom neuron model between spiking and non-spiking versions. Similar to above, that is something you would have to specify in the TensorFlow implementation of your neuron model (you can see an example of how this works for spiking rectified linear neurons here).
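For intuition, the non-spiking approximation that gets swapped in during training is just the neuron's steady-state rate curve. For a standard LIF neuron that is the usual rate equation, sketched here in numpy (parameter values are the Nengo defaults; this is the curve nengo.LIFRate computes):

```python
import numpy as np

def lif_rates(J, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for input current J.

    This smooth curve is what gets used in place of spikes during
    training; the spiking model is used at inference time.
    """
    rates = np.zeros_like(J)
    active = J > 1  # neurons at or below threshold current stay silent
    rates[active] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[active] - 1)))
    return rates

J = np.array([0.5, 1.5, 3.0])
print(lif_rates(J))  # first entry is 0 (below threshold)
```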

You can add random input noise to any neuron (which will effectively add randomness to the membrane potential) using the noise parameter of an Ensemble, e.g.

nengo.Ensemble(10, 1, neuron_type=nengo.LIF(), noise=nengo.processes.WhiteNoise())
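As a side note on what that noise does: the noise process is added to each neuron's input current every timestep, which in turn perturbs the membrane potential. A standalone numpy sketch of an Euler-integrated LIF membrane trace with additive Gaussian input noise (all parameter values here are illustrative, not Nengo's internals):

```python
import numpy as np

def noisy_lif_voltage(J, n_steps=1000, dt=0.001, tau_rc=0.02,
                      noise_std=0.1, seed=0):
    """Membrane potential of one LIF neuron driven by constant current J
    plus Gaussian input noise (spikes and resets to 0 at threshold 1)."""
    rng = np.random.RandomState(seed)
    v = 0.0
    trace = np.empty(n_steps)
    for i in range(n_steps):
        J_noisy = J + noise_std * rng.randn()  # noisy input current
        v += (dt / tau_rc) * (J_noisy - v)     # dv/dt = (J - v) / tau_rc
        if v >= 1.0:                           # threshold crossed: reset
            v = 0.0
        v = max(v, 0.0)                        # clamp at rest
        trace[i] = v
    return trace

trace = noisy_lif_voltage(J=1.2)  # suprathreshold drive, so it spikes
```

Without the noise term the trace would be a deterministic sawtooth; the noise jitters the spike times, which is the effect you get from the noise parameter above.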

Thank you for your answer, that is quite helpful. I have another question about constructing the network: what is the difference between the following two snippets?

net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
neuron_type = nengo.LIF(amplitude=0.01)
x = nengo_dl.tensor_layer(x, neuron_type)


a = nengo.Ensemble(Nx, dimensions=Nx, neuron_type=nengo.LIF(amplitude=0.01),
                   max_rates=nengo.dists.Choice([100]),
                   intercepts=nengo.dists.Choice([0]))
nengo.Connection(x, a, synapse=None)

where x is the output of a nengo_dl.tensor_layer and the dimension of x is Nx.

I am quite confused about how the two layers are connected in the above cases: are they connected densely, or is each neuron in one layer connected only to the corresponding neuron in the next layer, acting like an activation function? Also, do these neurons have gains and biases, and how are they used?

Thank you so much.

There are a couple of different things you’re showing there. One is the difference between using the Config system (net.config[...]) and setting individual parameters on an object. Using the config system sets the default values for all objects subsequently created in that scope, so e.g.

net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
a = nengo.Ensemble(10, 1)
b = nengo.Ensemble(10, 1)

is equivalent to

a = nengo.Ensemble(10, 1, max_rates=nengo.dists.Choice([100]))
b = nengo.Ensemble(10, 1, max_rates=nengo.dists.Choice([100]))

The other difference is between using Ensemble/Connection versus tensor_layer. tensor_layer (when passed a Nengo neuron type) is just a shortcut for creating a Connection and Ensemble at the same time, so

a = nengo_dl.tensor_layer(x, nengo.LIF(), max_rates=nengo.dists.Choice([100]))

is exactly equivalent to

a = nengo.Ensemble(x.size_out, 1, neuron_type=nengo.LIF(),
                   max_rates=nengo.dists.Choice([100]))
nengo.Connection(x, a.neurons, synapse=None)

Note that when you’re using a tensor_layer you’re connecting directly to the neurons (a.neurons), bypassing encoders/decoders, so there is no notion of dimensionality (Nx doesn’t matter).

The default behaviour is to connect each element of the output of x one-to-one with a (acting like an activation function, as you mention). However, you could add a dense weight matrix (or any other kind of Nengo transform) by setting the transform argument on the tensor_layer, e.g.

my_weights = np.random.randn(n_outputs, x.size_out)
a = nengo_dl.tensor_layer(x, nengo.LIF(), transform=my_weights)

(where n_outputs is the output dimensionality of your dense transform).
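In plain numpy terms, the difference between the two connection patterns looks like this (illustrative shapes, with relu as a stand-in for the neuron nonlinearity):

```python
import numpy as np

def relu(J):
    return np.maximum(J, 0)  # stand-in for the neuron nonlinearity

rng = np.random.RandomState(0)
x = rng.randn(5)                  # output of the previous layer, size 5

# default tensor_layer behaviour: one-to-one, like an activation function
a_elementwise = relu(x)           # shape (5,); neuron i sees only x[i]

# with transform=my_weights: a dense layer, every input feeds every neuron
my_weights = rng.randn(3, 5)      # shape (n_outputs, x.size_out)
a_dense = relu(my_weights @ x)    # shape (3,)
```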

Yes, all the neurons will have gains (multiplied by the inputs) and biases (added to the inputs) regardless of whether they are created by nengo.Ensemble or nengo_dl.tensor_layer.
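Concretely, the input current each neuron sees is J = gain * x + bias (elementwise, when connecting to .neurons), and Nengo solves for gain and bias from the max_rates and intercepts you specify. For a rectified linear neuron the solve is simple enough to write out; this numpy sketch uses that case for illustration (for LIF the gain/bias formula is different, but the J = gain * x + bias structure is the same):

```python
import numpy as np

def relu_gain_bias(max_rate, intercept):
    """Gain/bias so a rectified linear neuron is silent below `intercept`
    and fires at `max_rate` when x = 1 (the convention Nengo uses)."""
    gain = max_rate / (1.0 - intercept)
    bias = -intercept * gain
    return gain, bias

gain, bias = relu_gain_bias(max_rate=100.0, intercept=0.0)

x = np.array([-0.5, 0.0, 0.5, 1.0])
J = gain * x + bias        # input current per neuron
rates = np.maximum(J, 0)   # rectified linear response
# at x = 1 the neuron fires at max_rate; at/below the intercept it is silent
```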