Hi @Timodz!
I took a quick look at your code and I'll try to address some of your questions below.
Lateral Inhibition
Syntactically, what you are doing is the way lateral inhibition would be implemented in Nengo. That is to say, to implement an inhibitory connection where the inputs come from a population of neurons and inhibit that same population of neurons, you'll want a recurrent connection like so:
```python
nengo.Connection(layer1.neurons, layer1.neurons, transform=inhib_weights)
```
However, I believe that you have made a slight error when defining the inhibitory weights. You used `np.full(...)` to generate the weights, which creates not only lateral inhibitory weights but also self-inhibitory weights on the diagonal. In effect, the neurons are inhibiting themselves, and I would be surprised if the neurons even fired at all.
For lateral inhibition, I believe the connection weights you want would be:
```python
inhib_weights = (np.full((n_neurons, n_neurons), 1) - np.eye(n_neurons)) * -2
```
I used the `-2` factor above as the minimum value (I'll elaborate more on this below) you'll need for an inhibitory connection, but you can increase or decrease this value to change the strength of the inhibitory connection.
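Putting that together, here's a minimal, self-contained sketch of how the recurrent inhibitory connection might look. The `n_neurons` value and the sine-wave input are just placeholders for illustration, not values taken from your model:

```python
import numpy as np
import nengo

n_neurons = 50

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))  # placeholder input
    layer1 = nengo.Ensemble(n_neurons, dimensions=1)
    nengo.Connection(stim, layer1)

    # All-to-all inhibitory weights with the diagonal zeroed out,
    # so each neuron inhibits its neighbours but not itself
    inhib_weights = (np.full((n_neurons, n_neurons), 1) - np.eye(n_neurons)) * -2
    nengo.Connection(layer1.neurons, layer1.neurons, transform=inhib_weights)
```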
Homeostasis
This question really depends on what you want to achieve with "homeostasis". If you want to achieve a sort of normalization of neural activity across the whole ensemble, then using a homeostatic learning rule might be the way to go (I believe the Oja rule does homeostasis, but I'll have to double check on that). But, if you just want neural activity that falls to a baseline value after some period of spiking, then the ALIF neuron is the appropriate choice.
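If you do end up trying either approach, here's a rough sketch of how each would be specified in Nengo. The ensemble sizes, initial weights, and learning rate below are placeholder values, not recommendations:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Option 1: adaptive LIF neurons, whose firing rates adapt back
    # toward a baseline after a period of sustained spiking
    layer1 = nengo.Ensemble(50, dimensions=1, neuron_type=nengo.AdaptiveLIF())

    # Option 2: an Oja learning rule on a neuron-to-neuron connection,
    # which normalizes the learned connection weights over time
    pre = nengo.Ensemble(50, dimensions=1)
    conn = nengo.Connection(
        pre.neurons,
        layer1.neurons,
        transform=np.random.normal(0, 1e-3, (layer1.n_neurons, pre.n_neurons)),
        learning_rule_type=nengo.Oja(learning_rate=1e-6),
    )
```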
Event-based Simulations
I'm not 100% certain on this (I'll message the devs and get back to you shortly), but I don't believe there is an event-based mode in Nengo (nor does it support event-based processing).
Neuron Characteristics
Technically, a neural ensemble in Nengo can represent any input from -infinity to infinity. What the `radius` parameter determines is the range of input values over which the ensemble's encoders and decoders are optimized to best represent the input. This is part of the NEF algorithm and you can read about it here!
However, because you are using direct-to-neuron connections, the radius parameter does not affect the behaviour of your model. For direct-to-neuron connections, the parameter that determines the "range" of the neuron representation is the neuron `intercept` value. The intercept value determines the represented value at which the neuron starts firing (see example). If you want the neurons to operate over a range of 0 to 1 (actually, 0 to infinity), I would set the neuron intercepts to `intercepts=nengo.dists.Choice([0])`.
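For example, something like this (again, just a sketch, with a placeholder neuron count):

```python
import nengo

with nengo.Network() as model:
    # All neurons start firing at an input of 0, so the ensemble only
    # responds to inputs in the 0-to-infinity range
    layer1 = nengo.Ensemble(
        50,
        dimensions=1,
        intercepts=nengo.dists.Choice([0]),
    )
```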
A few notes about your model: by default, Nengo generates the intercept values from a random uniform distribution between -1 and 1. Thus, both your `input_layer` and `layer1` ensembles are being generated with randomly chosen intercepts, and that might explain some of the behaviour you are seeing as well.
The randomly chosen intercept values also explain why I chose the value of `-2` in the inhibitory connection above. This is to account for an intercept that is generated at -1 (i.e., the neuron is active when the input is anywhere from -1 to infinity) combined with an input signal of 1. Thus, to fully inhibit the ensemble in this scenario we'll need to offset the input signal by -2 to get an effective input to the neuron of -1 (i.e., 1 + (-2) = -1).