Lateral inhibition within Ensemble

I’m currently working on simulating a memristor that shows anti-Hebbian STDP responses: the weight decreases when the pre-pulse comes before the post-pulse (positive delay), and increases when the post-pulse comes before the pre-pulse (negative delay). Such an STDP response has been shown to be useful for lateral inhibition, as it can implement a Winner-Take-All (WTA) mechanism and possibly extract principal components from data.
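For concreteness, a common way to write such an anti-Hebbian STDP window (the amplitudes $A_+$, $A_-$ and time constants $\tau_+$, $\tau_-$ here are illustrative placeholders, not values from the memristor model) is:

$$\Delta w(\Delta t) = \begin{cases} -A_{+}\, e^{-\Delta t / \tau_{+}}, & \Delta t > 0 \text{ (pre before post)} \\ +A_{-}\, e^{\Delta t / \tau_{-}}, & \Delta t < 0 \text{ (post before pre)} \end{cases}$$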
I was looking at how to implement this and came across the NengoSPA WTA network, where ensembles inhibit all other ensembles once activated.
While thinking about lateral inhibition, I thought of creating a recurrent connection in which one neuron (a) inhibits all other neurons (b) inside a single ensemble, i.e., (a) → (b) is an inhibitory connection and (a) and (b) are in the same ensemble.
Would something like that even make sense? Or is it more useful to make a network similar to the WTA network, where ensembles inhibit other ensembles?
How would you implement such a recurrent connection?

Hi @Flowhill,

Your question about whether to use an ensemble (vs a single neuron) for your use case is, unfortunately, dependent on the input signals you expect the network to receive. The NengoSPA WTA network uses ensembles because it’s meant as a general-purpose network, capable of processing input signals across a wide range of values (from -1 to 1), which is difficult to do with just a single neuron. If your network, however, is only expected to process signals that a single neuron can represent well, then a single neuron might be the better approach (a single neuron is less resource intensive than an ensemble). Additionally, if you are attempting to recreate biological data (which is typically recorded from single neurons rather than ensembles), then using single neurons may also give you a better recreation of that data.

Implementing a lateral inhibition network with single neurons is actually quite similar to the NengoSPA WTA network approach. However, while the NengoSPA example connects ensembles together, if you want to do lateral inhibition within a single ensemble, all you need to do is connect to/from the neurons directly:

import nengo
import numpy as np

with nengo.Network() as model:
    ens = nengo.Ensemble(n_neurons, 1, <additional_ensemble_parameters>)
    nengo.Connection(
        ens.neurons,
        ens.neurons,
        # 0 on the diagonal (no self-inhibition), -inhibit_scale everywhere else
        transform=inhibit_scale * (np.eye(n_neurons) - 1.0),
        synapse=inhibit_synapse,
    )
# Values I'd recommend for `inhibit_scale`: 3-5
# Values I'd recommend for `inhibit_synapse`: 0.005 (the default)

I should note, however, that in Nengo, all neurons in an ensemble are initialized with random parameters (random gains and biases), so you’ll probably want to set these values directly (using max_rates and intercepts, or by setting the gains and biases) instead of having Nengo initialize them for you.
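As a minimal sketch of what that could look like (the specific values here are illustrative, not recommendations), you can pass non-random distributions so every neuron gets identical, known parameters:

import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(
        9, 1,
        max_rates=nengo.dists.Choice([100]),   # every neuron peaks at 100 Hz
        intercepts=nengo.dists.Choice([0.0]),  # every neuron starts firing at x = 0
    )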


Thank you very much for your answer. The way that (np.eye(n_neurons) - 1.0) is used is very elegant; I didn’t notice that the first time around.
For now, I have started working on my experiment using single-neuron ensembles. Would there be any significant difference between using
(a) one ensemble with X neurons (in my case 9), connecting the nodes that give input to each of the neurons (via ensemble.neurons[0], ensemble.neurons[1], etc.),
and
(b) X ensembles with 1 neuron each (so 9 ensembles in my case), connecting the nodes to each ensemble.neurons?

Extra question:
I recall it having something to do with encoders and decoders, but what is the difference between connecting a node to the ensemble, nengo.Connection(node, ens), and connecting the node directly to the neurons, nengo.Connection(node, ens.neurons)?

The only difference you should encounter between the two scenarios is the speed at which your simulation runs. For the number of neurons you are using (9), there shouldn’t be any noticeable difference between them.

However, if you have a lot of neurons, then approach (a) would run faster than (b). This is because with (a), the connection weights can be computed as one matrix multiplication, which NumPy optimizes to run across multiple CPU cores. With (b), instead of one matrix multiplication, you get X independent multiplications, which are not optimized to run across multiple CPU cores (so it ends up being slower).
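As a sketch of the two wirings (the name `inputs` is a placeholder, assuming a 9-dimensional input node):

import nengo

with nengo.Network() as model:
    inputs = nengo.Node([0.5] * 9)  # placeholder constant 9-D input

    # (a) one ensemble with 9 neurons: a single 9x9 matrix multiply per timestep
    ens = nengo.Ensemble(9, 1)
    nengo.Connection(inputs, ens.neurons)

    # (b) nine 1-neuron ensembles: nine independent 1x1 multiplies per timestep
    singles = [nengo.Ensemble(1, 1) for _ in range(9)]
    for i, single in enumerate(singles):
        nengo.Connection(inputs[i], single.neurons)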

That is correct! When you connect directly to the ensemble (i.e., nengo.Connection(..., ens)), you are using the ensemble in NEF mode. In NEF mode, the encoders and decoders of the ensemble are applied (i.e., multiplied) to the input and output signals of the ensemble. Thus, the flow of information through an ensemble goes something like this (simplified):

signal input → $\times$ connection weight → synapse applied → $\times$ encoder → neuron non-linearity → $\times$ decoder → signal output

But if you connect to (or from) the .neurons attribute of the ensemble, it bypasses the multiplication by the encoders (on the input side) and by the decoders (on the output side). So the signal flow looks something like this:

signal input → $\times$ connection weight → synapse applied → neuron non-linearity → signal output
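Put as code, the two connection styles look like this (a minimal sketch; the ones transform that fans the scalar out to all neurons is just one illustrative choice):

import nengo
import numpy as np

with nengo.Network() as model:
    node = nengo.Node(0.5)
    ens_a = nengo.Ensemble(10, 1)
    ens_b = nengo.Ensemble(10, 1)

    # NEF mode: the scalar is multiplied by each neuron's encoder
    nengo.Connection(node, ens_a)

    # Neuron mode: bypasses the encoders; the transform injects the node's
    # value directly as input current to all 10 neurons
    nengo.Connection(node, ens_b.neurons, transform=np.ones((10, 1)))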
