New to Nengo & Confused about some concepts

Hi guys,

I am new to Nengo and confused about some of the concepts it uses, and I was wondering if I could get a beginner-friendly explanation here.

  1. What is an encoder in terms of neural networks?
  2. What do the parameters ‘intercepts’ and ‘max rates’ actually do?
  3. What is the ‘radius’ of a neural ensemble?
  4. Does connecting objects in Nengo simulate the behavior of a synapse? If so, what is the point of the ‘synapse=’ parameter, and what does the number after it do?
  5. Why are we allowed to perform functions over a connection? What concept in neural networks is this related to? My understanding was that the synapse just connected neurons and held the weight.
  6. What does the ‘transform’ parameter do to the connection?
  7. What is a “dimension” of an ensemble? I understand that it allows you to store an additional real number in the ensemble, and that the real number must be stored over many neurons for precision and to average out the noise, but is there a deeper reason?

Sorry if these questions sound trivial but I couldn’t find any good tutorial or guide to Nengo that explains these concepts. Any help is greatly appreciated. Thank you in advance.


This paper gives a quick introduction to some of these concepts, which might be helpful: http://compneuro.uwaterloo.ca/files/publications/stewart.2012d.pdf. I’ll give some brief answers here, but I’d definitely recommend reading the paper as it explains things in more detail.

  1. What is an encoder in terms of neural networks?

You can think of the encoders as representing the preferred stimulus of a neuron (also sometimes referred to as a “tuning curve” in neuroscience). It represents the input stimulus that will cause that neuron to respond most strongly. In practical terms at the neural network level, you can just think of these as part of the connection weights on a given connection.
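As a minimal sketch (the encoder values here are made up purely for illustration), you can set the encoders explicitly when creating an ensemble:

```python
import nengo

with nengo.Network() as model:
    # Each row of `encoders` is one neuron's preferred direction in the
    # represented 2-D space; a neuron responds most strongly when the
    # input points along its encoder.
    ens = nengo.Ensemble(
        n_neurons=4,
        dimensions=2,
        encoders=[[1, 0], [-1, 0], [0, 1], [0, -1]],
    )
```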

  2. What do the parameters ‘intercepts’ and ‘max rates’ actually do?

These control the gain and bias on the neurons. The intercept is the input value at which that neuron will begin to respond, and the max rate is the magnitude of the neuron’s response when the input value is equal to the max value (as defined by the radius parameter, discussed next). Given a neuron model (e.g. LIF or rectified linear), if you have a desired intercept and max rate you can calculate what the corresponding gain and bias of the neuron should be. And then it is the gain and bias that are actually being simulated when the network runs.
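A minimal sketch of setting these directly (the specific values are arbitrary):

```python
import nengo

with nengo.Network() as model:
    # This neuron starts responding once dot(x, e) exceeds 0.2, and fires
    # at ~150 Hz when dot(x, e) = 1. Nengo derives the actual gain and
    # bias from these values for the chosen neuron model.
    ens = nengo.Ensemble(
        n_neurons=1,
        dimensions=1,
        intercepts=[0.2],
        max_rates=[150],
        neuron_type=nengo.LIF(),
    )
```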

  3. What is the ‘radius’ of a neural ensemble?

Mathematically, the radius is used in the intercept/max rate calculations, as described above. Practically, the radius defines the expected magnitude of the inputs to that ensemble. E.g., if I expect the inputs to an ensemble to have magnitude in the range (-1, 1), then I would set the radius to 1. If I expect inputs in the range (-2, 2), I would set the radius to 2. This isn’t a hard limit; you can still simulate the network with inputs outside the expected range, but the performance will probably not be as good for those inputs.
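For example (a minimal sketch):

```python
import nengo

with nengo.Network() as model:
    # This ensemble expects inputs of magnitude up to 2, so its tuning
    # curves are optimized over (-2, 2) rather than the default (-1, 1).
    ens = nengo.Ensemble(n_neurons=50, dimensions=1, radius=2)
```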

  4. Does connecting objects in Nengo simulate the behavior of a synapse? If so, what is the point of the ‘synapse=’ parameter, and what does the number after it do?

Yes, connecting objects includes a simulation of synaptic behaviour. The default synapse is a Lowpass filter, and the number is the time constant of that filter. Or you could use any of the other synapse models in nengo.synapses.
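A minimal sketch of the equivalent ways to specify the synapse (the time constants here are arbitrary):

```python
import nengo

with nengo.Network() as model:
    a = nengo.Ensemble(50, dimensions=1)
    b = nengo.Ensemble(50, dimensions=1)

    # A float is shorthand for a lowpass filter with that time constant:
    nengo.Connection(a, b, synapse=0.005)
    # Equivalent explicit form:
    #   nengo.Connection(a, b, synapse=nengo.synapses.Lowpass(0.005))
    # Or another synapse model entirely:
    #   nengo.Connection(a, b, synapse=nengo.synapses.Alpha(0.01))
```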

  5. Why are we allowed to perform functions over a connection? What concept in neural networks is this related to? My understanding was that the synapse just connected neurons and held the weight.

The function you specify on a connection is defining what function you would like that connection to perform. Nengo will then automatically solve for a set of connection weights that best approximate that function. So in the end the actual network just contains connection weights and synapses, as you expect.
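For example, a minimal sketch asking Nengo to approximate x**2 across a connection (neuron counts arbitrary):

```python
import nengo

with nengo.Network() as model:
    a = nengo.Ensemble(100, dimensions=1)
    b = nengo.Ensemble(100, dimensions=1)
    # Nengo solves for connection weights that approximate x**2 across
    # this connection; no explicit squaring happens at simulation time.
    nengo.Connection(a, b, function=lambda x: x ** 2)
```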

  6. What does the ‘transform’ parameter do to the connection?

This just applies a linear transformation to the output of the function we talked about above. Or, in the case of non-decoded connections (e.g., directly connecting to ensemble.neurons objects), it defines the connection weight matrix. In either case, you again just end up with a set of connection weights connecting the two ensembles; the transform is a mathematical abstraction.
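A minimal sketch of both cases (the weight matrix is all zeros here, purely as a placeholder; real weights would come from a solver or a learning rule):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    a = nengo.Ensemble(50, dimensions=1)
    b = nengo.Ensemble(50, dimensions=1)

    # Decoded connection: scale the represented value by -0.5.
    nengo.Connection(a, b, transform=-0.5)

    # Non-decoded connection: the transform is the full (post, pre)
    # weight matrix between the two neuron populations.
    nengo.Connection(a.neurons, b.neurons, transform=np.zeros((50, 50)))
```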

  7. What is a “dimension” of an ensemble? I understand that it allows you to store an additional real number in the ensemble, and that the real number must be stored over many neurons for precision and to average out the noise, but is there a deeper reason?

I think this is best explained in the paper above, hopefully this will make more sense after reading that.
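In the meantime, here is a minimal sketch of one thing a multi-dimensional ensemble buys you: because a single 2-D ensemble represents the point (x, y) jointly, a connection from it can compute functions that mix the dimensions, such as their product (the neuron counts are arbitrary):

```python
import nengo

with nengo.Network() as model:
    # A single 2-D ensemble represents the point (x, y) jointly, so a
    # connection from it can compute functions of both dimensions.
    xy = nengo.Ensemble(200, dimensions=2)
    product = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(xy, product, function=lambda v: v[0] * v[1])
```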


Hi drasmuss,

Are e and x normalized? The definition of max_rates says:

max_rates : Distribution or (n_neurons,) array_like, optional
The activity of each neuron when the input signal x is magnitude 1
and aligned with that neuron’s encoder e ;
i.e., when dot(x, e) = 1 .

A maximum of dot(x, e) = 1 is only possible if both e and x are normalized, right?

Hi!

To answer your question: the neuron’s encoders e are normalized, but the input signal x is not. The max_rates value for a specific neuron is defined to occur when dot(x, e) = 1. The implication of this is that it is possible to drive a neuron beyond its specified max_rates value. The neuron’s response curve will follow its activation function up to its true maximum firing rate, which is typically 1/tau_ref.

The image below is the response curve of an example LIF neuron with tau_ref = 0.002 and tau_rc = 0.02, where the intercept has been set to x = -1 and max_rates = 200 (at x = 1). As you can see, it is possible to drive the neuron with a value of x that exceeds 1, which results in an output firing rate that exceeds the defined max_rates value.
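Here is a minimal sketch that reproduces a similar plot using nengo.utils.ensemble.tuning_curves (the ensemble parameters match the description above; the input range is my own choice for illustration):

```python
import matplotlib.pyplot as plt
import numpy as np
import nengo
from nengo.utils.ensemble import tuning_curves

with nengo.Network() as model:
    ens = nengo.Ensemble(
        n_neurons=1,
        dimensions=1,
        encoders=[[1]],
        intercepts=[-1],
        max_rates=[200],
        neuron_type=nengo.LIF(tau_rc=0.02, tau_ref=0.002),
    )

with nengo.Simulator(model) as sim:
    # Evaluate the response curve for inputs beyond the radius to show
    # the firing rate exceeding the defined max_rates value.
    x = np.linspace(-2, 3, 501).reshape(-1, 1)
    _, activities = tuning_curves(ens, sim, inputs=x)

plt.plot(x, activities)
plt.xlabel("input x")
plt.ylabel("firing rate (Hz)")
plt.show()
```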

Hi xchoo,

Thank you very much for the quick reply. Your explanation was very clear. Just to be sure: the encoders are normalized by default, but there is also the option to set this to False via nengo.Ensemble.normalize_encoders. Still, max_rates is defined for dot(x, e) = 1, right? Nothing will change?

That is correct. The encoders are normalized by default, unless you set .normalize_encoders=False or pass in normalize_encoders=False when you create the ensemble.

And your second question is correct as well. The max_rates are defined for the point where dot(x, e) = 1. It is, however, recommended to use normalized encoders, as the extra scaling factor introduced by non-normalized encoders can unintentionally affect the neural ensembles if it is not accounted for, or if you are unaware that the encoders are not normalized.

Hello xchoo, one more question on this topic. The dimensionality of an ensemble should match the dimensionality of the input, right?

Let’s say I have an input image of 8x8 pixels flattened to a 1x64 vector. I then pass this to a layer of 100 neurons. Am I right to define this layer as
layer1 = nengo.Ensemble(n_neurons=100, dimensions=64)
Or should dimensions represent the data being represented? In this case, each image is a number from 0 to 9. Should dimensions be 10 (the number of data representations) or 64 (the dimension of the input vector)?

In most cases, the dimensionality of the ensemble should match the dimensionality of the input. In your example, you would want to define an ensemble of 64 dimensions, one for each dimension in your input vector. There are a few notes to this statement, though.

To be more precise, the dimensionality of the ensemble should match the output dimensionality of whatever transform is applied to the input connection. The transform on an input connection defaults to an identity transform (unless changed by the user), and in that case the dimensionality of the ensemble should match the dimensionality of the input.
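A small sketch of both cases (the 10x64 transform here is made up purely for illustration):

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(output=np.zeros(64))  # e.g. a flattened 8x8 image

    # Default (identity) transform: the ensemble's dimensionality matches
    # the input's dimensionality.
    layer1 = nengo.Ensemble(n_neurons=100, dimensions=64)
    nengo.Connection(stim, layer1)

    # With a 10x64 transform on the connection, the receiving ensemble
    # must instead match the transform's output dimensionality.
    layer2 = nengo.Ensemble(n_neurons=100, dimensions=10)
    nengo.Connection(stim, layer2, transform=np.ones((10, 64)) / 64.0)
```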

Another note I should mention is that we sometimes use ensemble arrays to represent high-dimensional vectors. These are used because they are generally more efficient to compute: internally, Nengo computes the decoders for an ensemble using a matrix operation that scales with the square of the number of neurons, and we typically scale the number of neurons linearly with the dimensionality of the ensemble, so using ensemble arrays makes the overall scaling linear. They are also more accurate, since there is less cross-talk between the different dimensions of the represented information. However, ensemble arrays can only be used when the output function (the decoders) does not require interactions between dimensions.
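For reference, a minimal sketch of an ensemble array representing a 64-D vector (the neuron count and the constant input are arbitrary choices for illustration):

```python
import nengo

with nengo.Network() as model:
    # 64 one-dimensional sub-ensembles of 50 neurons each, instead of a
    # single 64-D ensemble. This works as long as the decoded function
    # does not need to mix the dimensions.
    ea = nengo.networks.EnsembleArray(n_neurons=50, n_ensembles=64)
    stim = nengo.Node(output=[0.5] * 64)
    nengo.Connection(stim, ea.input)
    probe = nengo.Probe(ea.output, synapse=0.01)
```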