I’m a beginner, so I have some confusion about the ensemble and connection functions.

How do we define the “dimensions” in the ensemble, e.g. nengo.Ensemble(60, dimensions=2)? Is it a neuron array with a size of (30, 2) or (60, 2), or something like this? I also see a definition nengo.Ensemble(1, dimensions=2), but how can 1 neuron be 2-dimensional?

What’s the difference between “transform” and “function” in the connection function? And what’s the meaning of “transform=[[-1]] * error.n_neurons”? Does it build weights of -1, one for each of the error.n_neurons neurons?

If you are a beginner to Nengo, I recommend checking out the Nengo summer school YouTube playlist, which goes through the basics of Nengo and the basics of the Neural Engineering Framework (NEF) that Nengo uses to implement some of its core algorithms. This page on our Nengo documentation site also provides a quick summary of the NEF algorithm, along with Nengo code to illustrate it. If you want to know the code details of the NEF algorithm, this page implements it in standard (i.e., non-Nengo) Python, and is a useful reference if you want to re-implement the NEF in another programming language.

As for your questions in particular though, I’ll answer them below, but without getting too deep into the details of Nengo and the NEF.

In Nengo, the dimensionality of an ensemble (or a neuron) is really an abstract concept. While dimensions are what the user usually sees when working with Nengo ensembles, neurons don’t actually do their computation in this “dimensional” space. Rather, when a network is built (in Nengo, this is when you make the with nengo.Simulator() call), each neuron is assigned a “preferred direction vector” or “encoder” which maps the dimensional space onto an input current that is injected into the neuron. If you are familiar with machine learning networks, the encoder is similar to the input weights of a neuron.
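If it helps to see the idea in code, here is a minimal NumPy sketch of that encoding step. This is not Nengo’s actual implementation, and the encoder, gain, and bias values are made up for illustration:

```python
import numpy as np

# Hypothetical 2D encoder (unit "preferred direction" vector) for one neuron
encoder = np.array([0.6, 0.8])
gain, bias = 1.5, 0.5  # made-up per-neuron gain and bias current

def input_current(x):
    """Map a 2D input vector to a scalar input current (NEF-style encoding)."""
    return gain * np.dot(encoder, x) + bias

# An input aligned with the encoder produces a large current...
print(input_current(np.array([0.6, 0.8])))   # 1.5 * 1.0 + 0.5 = 2.0
# ...while an orthogonal input produces only the bias current.
print(input_current(np.array([-0.8, 0.6])))  # 1.5 * 0.0 + 0.5 = 0.5
```

So a 2D ensemble with 1 neuron is perfectly well defined: the single neuron has a 2D encoder, and any 2D input is collapsed into a scalar current through that dot product.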

On the output side, spikes generated by neurons are weighted by a set of decoders that project the spikes back into the dimensional space. These spikes are then smoothed by post-synaptic currents to give you continuous (i.e., non-spiking) values in this dimensional space. Note that because the encoders map input vectors into neuron activity, and the decoders map neuron activity into output vectors, the encoders and decoders don’t have to be of the same dimensional size (e.g., the encoders can be 4D while the decoders are 2D).
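Here is a rough NumPy sketch of the decoding side, assuming simplified rectified-linear rate neurons with made-up gains, biases, and 1D encoders. Nengo’s actual decoder solvers are more sophisticated (regularization, noise modelling), but the core idea is the same least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rate-neuron population representing a 1D value x in [-1, 1]
n_neurons = 30
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # 1D encoders are just +/- 1
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def activities(x):
    """Rectified-linear firing rates for inputs x (shape: len(x) x n_neurons)."""
    return np.maximum(0.0, np.outer(x, encoders) * gains + biases)

# Solve for decoders by least squares: find D such that A @ D ~= x
x = np.linspace(-1, 1, 200)
A = activities(x)
decoders, *_ = np.linalg.lstsq(A, x, rcond=None)

# Decoding the population activity recovers the represented value
x_hat = activities(np.array([0.5])) @ decoders
print(x_hat)  # approximately 0.5
```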

A note about the encoders: Encoders determine which vector (or range of vectors) the neuron is most sensitive to. That is to say, if an input vector is very similar to the neuron’s encoder, the resulting neuron activity will be high. Conversely, if the input vector is not similar to the neuron’s encoder, the resulting neuron activity will be low. In ensembles with few neurons (or 1 neuron, as you mentioned), this effect becomes very apparent. The output of the ensemble will be quiet for most vector inputs, except for those that align with the neuron’s encoder. Since we want neural ensembles to represent a range of vectors (ideally, all possible vectors [of a certain magnitude] of the input dimensionality), we use more neurons in the ensemble. Since the encoders are (by default) randomly generated, the more neurons there are, the better the chance that the input vector space is collectively covered by all of the neurons.
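To illustrate that coverage argument, here is a small NumPy experiment (not Nengo code; the encoders are random 2D unit vectors, which is how Nengo generates them by default). It measures, for the worst-case input direction, how well the best-matching encoder aligns with it:

```python
import numpy as np

rng = np.random.default_rng(42)

def coverage(n_neurons, n_test=360):
    """Worst-case similarity between any 2D input direction and its
    best-matching encoder, for n_neurons random unit encoders."""
    angles = rng.uniform(0, 2 * np.pi, size=n_neurons)
    encoders = np.column_stack([np.cos(angles), np.sin(angles)])
    test_angles = np.linspace(0, 2 * np.pi, n_test, endpoint=False)
    test_vecs = np.column_stack([np.cos(test_angles), np.sin(test_angles)])
    # Similarity of each test direction to its closest encoder
    best = (test_vecs @ encoders.T).max(axis=1)
    return best.min()

print(coverage(1))   # negative: some directions point away from the lone encoder
print(coverage(50))  # near 1.0: some encoder is close to every direction
```

With one neuron, the direction opposite the encoder is essentially unrepresented; with 50 random encoders, every direction has a nearby encoder.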

The transform and function parameters of a Nengo connection are fairly similar, and both serve to provide Nengo with information on how to solve for the connection’s decoders. The transform parameter is useful when you want the decoders to compute some sort of matrix multiplication (note that a scalar multiplier can be considered a matrix multiplication). The function parameter is useful when you want the decoders to perform some complex mathematical operation that can’t easily be specified as a matrix multiplication. Note that you can use both together as well.

It is important to note that since both transform and function just serve to generate the decoder solving data, things that generate the same data will result in the same decoders (and thus identical networks). As an example, the two pieces of code below will generate identical networks:

```python
import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(30, 1, seed=0)
    out = nengo.Node(size_in=1)
    nengo.Connection(ens, out, transform=2)
    # Apply a scalar multiple of 2 to the output value of ens
```

```python
import nengo

with nengo.Network() as model:
    ens = nengo.Ensemble(30, 1, seed=0)
    out = nengo.Node(size_in=1)
    nengo.Connection(ens, out, function=lambda x: x * 2)
    # Multiply the output value of ens by 2
```
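To see why the two networks end up identical, consider the data the decoder solver works with: for each evaluation point sampled from the ensemble’s represented space, both transform=2 and function=lambda x: x * 2 produce the same target value, so the least-squares problem (and thus its solution) is the same. A plain NumPy sketch of that idea (not Nengo internals):

```python
import numpy as np

# Sample of the ensemble's represented space (Nengo calls these eval points)
eval_points = np.linspace(-1, 1, 5)

# transform=2: targets are the identity function scaled by the transform
targets_transform = 2 * eval_points

# function=lambda x: x * 2: targets are the function applied to each eval point
targets_function = np.array([x * 2 for x in eval_points])

print(np.array_equal(targets_transform, targets_function))  # True
```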

That is sort of correct. The code specifies that the output of the preceding ensemble should be multiplied by a matrix of size error.n_neurons x 1, where every element has a value of -1. I.e., the matrix looks like [[-1], [-1], ..., [-1]], with one [-1] row per neuron in error.

You’ll notice that the matrix is larger than it typically would be. Typically, transformation matrices should be of the shape: post_inp_dim x pre_out_dim (i.e., in the case where both pre and post ensembles are of dimensionality 2, the transformation matrix should be a 2x2 matrix). The code you are referencing here uses a non-standard ensemble-to-neurons connection (since the “post” object of the connection is a .neurons attribute). This means that this connection bypasses the neural encoders and directly connects to the neurons themselves. We typically use this to implement inhibitory connections, where we want the input signal to directly affect the input current to the neuron, rather than having encoders map the input signal to the input current.
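As a rough illustration of why this inhibits the ensemble, here is a NumPy sketch with simplified rectified-linear rate neurons (the gains and biases are made up): the -1 transform adds the negated control signal directly to each neuron’s input current, and a sufficiently large signal drives every rate to zero.

```python
import numpy as np

# Simplified rate neurons: rate = max(0, gain * current + bias)
gains = np.array([1.0, 1.5, 2.0])
biases = np.array([0.5, 0.2, 0.8])

def rates(current):
    return np.maximum(0.0, gains * current + biases)

x = 0.4        # current from the ensemble's normal (encoded) input
inhibit = 2.0  # inhibitory control signal

print(rates(x))  # neurons are active
# transform=[[-1]] * n_neurons adds -1 * inhibit directly to each
# neuron's input current, bypassing the encoders:
print(rates(x - inhibit))  # all rates driven to zero
```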

I understand that all that I have described above is probably very confusing to you, so if you have further questions, please don’t hesitate to ask them here!