LIF neuron activity

The lines circled in red are the individual spikes emitted from the neuron. That is correct.

The plot circled in blue is the plot of the neuron’s membrane voltage. When the membrane voltage exceeds a certain level (in this case, 1), a spike is emitted from the neuron. That is why the two plots are correlated.

That plot is the neuron membrane voltage. In that period of time, there is no input to the neuron (see the orange line in the top plot), which means that there is no input current into the neuron. Since the neuron is an LIF neuron (i.e., leaky), with no input current, the membrane voltage slowly decreases. As the input comes back up, input current is fed into the neuron, and the membrane voltage rises again.
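If it helps to see the mechanics, here is a rough standalone sketch of that leaky behaviour (the time constant, threshold, and input values are made up, and this is a simplification, not Nengo's exact implementation):

import numpy as np

dt, tau_rc, v_th = 0.001, 0.02, 1.0  # timestep, membrane time constant, spike threshold
v, spike_times = 0.0, []
input_current = np.r_[2.0 * np.ones(50), np.zeros(50)]  # input on, then off
for step, J in enumerate(input_current):
    v += (dt / tau_rc) * (J - v)  # leaky integration: v decays toward 0 when J = 0
    if v >= v_th:  # crossing the threshold emits a spike and resets the membrane voltage
        spike_times.append(step * dt)
        v = 0.0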

This behaviour is a result of the NEF (neural engineering framework). This YouTube video series describes the NEF in detail, and the second video (at about the 1 hr 50 min mark) describes how using more neurons results in a better representation of the input signal.

In Nengo, if you create a connection without any weights (i.e., without using the function or transform parameters), Nengo will create a connection with the default weight of None. Internally, when Nengo sees this, it will build the connection and solve for decoders to approximate the identity function (i.e., the ensemble output will approximate the ensemble input).
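For example, this connection (a minimal sketch; the ensemble sizes here are arbitrary) will have its decoders solved to approximate the identity function:

import nengo

with nengo.Network() as model:
    ens_A = nengo.Ensemble(100, 1)
    ens_B = nengo.Ensemble(100, 1)
    conn = nengo.Connection(ens_A, ens_B)  # no function or transform: approximates f(x) = x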

To obtain the solved weights (i.e., the weights used when the simulation is running), you can probe for it using:

conn = nengo.Connection(...)
p_weights = nengo.Probe(conn, "weights", ...)

If you use the Nengo probe, it is advisable to set the sample_every parameter to reduce the amount of RAM your simulation uses (see an example here).
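For example (the 0.1s sampling interval here is just an illustrative value):

p_weights = nengo.Probe(conn, "weights", sample_every=0.1)  # record every 0.1s instead of every timestep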

You can also obtain the solved weights by using this code:

with nengo.Simulator(model) as sim:
    weights = sim.model.params[conn].weights

As a note, nengo.Node objects do not have decoder weights.

The decoders of an ensemble are solved to approximate a given function (see the NEF lectures). As part of the solving process, a set of evaluation points is internally generated by Nengo and used to solve for the decoders. This means that an ensemble can have decoder weights even without an input connection.
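Conceptually, that solving step is a least-squares fit from the neuron activities (at the evaluation points) to the desired function outputs. A standalone sketch with made-up data, using plain least squares rather than Nengo's regularized solver:

import numpy as np

rng = np.random.default_rng(0)
activities = rng.random((500, 50))  # hypothetical firing rates: 500 evaluation points x 50 neurons
targets = rng.random((500, 1))  # desired function output at each evaluation point
decoders, *_ = np.linalg.lstsq(activities, targets, rcond=None)  # (50, 1) decoder matrix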

In your network, if you don’t connect the A node to ensemble_A, then the output of ensemble_A will just be 0.

Thank you for your response, and I understand your point, but what I mean is: why are the conn1 and conn2 weights None while the conn3 and conn4 weights are not, even though I don’t use the ‘function’ or ‘transform’ parameters on any of the 4 connections?

I don’t understand the question you are asking here… When I create the network as you have, all four connections have weights that are None (when you probe them using sim.model.params[connX].weights).

Can you provide some code regarding your question? Maybe show how you are obtaining and printing out the weights?

Really? Because when I create the network, the conn1 and conn2 weights are None, but the conn3 and conn4 weights aren’t (I changed n_neurons to 2 instead of 100).

Also, I want to ask you a question about the connections between neurons in an ensemble. I read a paper written about Nengo which said that the neurons in an ensemble are connected by weights, but I am not sure whether that is true. Can you tell me the answer?

Ah. I found a bug in the network I was implementing. You are correct in the observation that the weights for conn1 and conn2 are None, while conn3 and conn4 have actual values. However, my original statement is still correct:

The .weights attribute returns the decoder weights for connections to / from ensembles, and the full connection matrix for connections from .neurons objects (e.g., ens.neurons). In your model, conn1 and conn2 are connections from Nengo nodes, and thus they do not have decoder weights, so the returned value is None. For nengo.Nodes, a None weight is treated as not changing the value the node is outputting.

If you create an ensemble on its own, and create a connection to it, e.g.:

with nengo.Network() as model:
    ens_B = nengo.Ensemble(10, 1)
    nengo.Connection(ens_A, ens_B)  # ens_A created elsewhere in the model

then no, there will be no weights between the neurons in the ens_B population. To achieve a network similar to what you have in the image, you’ll need to create a recurrent connection on the ens_B population:

with nengo.Network() as model:
    ens_B = nengo.Ensemble(10, 1)
    nengo.Connection(ens_A, ens_B)
    nengo.Connection(ens_B, ens_B)  # Recurrent connection

Yepp. Thank you so much, I understand what you said. Also, I wonder: when an ensemble has many neurons (e.g., 5 neurons) and a spike is fired, how does Nengo know which neuron the spike came from?

Yepp, I see. However, I wonder: when I create a recurrent connection on ens_B (n_neurons = 2), shouldn’t the connection matrix have shape (1, 4), since each neuron in the ensemble connects to every other neuron? But the real matrix’s shape is (1, 2).



In Nengo, when you are connecting an ensemble to another ensemble (even if the destination ensemble is the same as the origin ensemble), the connection weights are always the decoders. In this case, ensemble_B has 2 neurons and is 1D, so the decoders will be a 2x1 matrix (note, depending on which way you set up your matrix convention, it can also be 1x2).

When connecting two ensembles, the weight matrices look like this:

input -> encoders -> ensemble -> decoders -> encoders -> ensemble -> decoders -> output

If the connection is a recurrent one, it would look something like this:

             ,-----------------------------,
             V                             |
input -> encoders -> ensemble -> decoders -'

Note that if you connect directly to the neurons (using the .neurons attribute), you should get a 2x2 matrix. I don’t think there will be any instance where you will get a 1x4 connection matrix.
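If you want to verify the decoder shape yourself, here’s a quick sketch (2 neurons, 1 dimension, matching your example):

import nengo

with nengo.Network() as model:
    ens_B = nengo.Ensemble(2, 1)
    conn_rec = nengo.Connection(ens_B, ens_B)  # decoded recurrent connection

with nengo.Simulator(model) as sim:
    print(sim.data[conn_rec].weights.shape)  # (1, 2): 1D decoders for the 2 neurons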

I am not sure what you mean?

In Nengo, the dynamics of how a neuron behaves are a function of the amount of current being input to the neuron and the amount of voltage across the neuron’s cell membrane. However, for neurons to perform useful computation, there needs to be a way to “convert” real-valued numbers (i.e., scalar values, or vector values) into the corresponding input current, as well as to reverse this conversion, “converting” spikes or currents back into real-valued numbers.

Nengo uses an algorithm called the neural engineering framework to solve for the weights needed to do this conversion. You can read more about the algorithm here and here, as well as through this series of videos. In Nengo, the weights that convert real-valued numbers into the neuron input currents are called “encoders”, and the weights that convert filtered spike trains back to real-valued numbers are called “decoders”. This is what I am referring to when I refer to the decoders.

I should note that “encoders” and “decoders” are purely mathematical concepts. If you look at just the weights between a population of neurons, you can’t really tell they are there, because they are combined together to form the full connection weight matrix. But, in Nengo, we keep the encoders and decoders around because it makes the connections conceptually easier to work with.

Thanks for your useful answers. However, I have a bit of confusion here. There are some issues below.

  1. As you said,

so this input is the encoded value, right?

Also, I read a paper written about Nengo, and I want to confirm that the spike trains in the bottom panel A are the spikes that are fired, not the encoded values. The details are presented here: Frontiers | Nengo: a Python tool for building large-scale functional brain models | Frontiers in Neuroinformatics.

  2. Furthermore, I said

    However, in this example, The NEF algorithm — Nengo 3.2.0.dev0 docs, I don’t see where the outputs of ensemble A are decoded before entering ensemble B. Instead, for each neuron in A that spikes, the input current of all the neurons in ensemble B it is connected to is increased by the synaptic connection weight.
    So I wonder whether the spike outputs of an ensemble are always decoded into a continuous value and then encoded to enter the next ensemble, or whether we can simply understand it as above: they aren’t actually decoded, and each spike just increases the input current of all the neurons in the next ensemble.

In a way, it is. In that code, the input is the input x value encoded by the encoder and multiplied by the neuron gain. This is then added to the bias current. The input here is thus the total input current to the neuron.

The encoder encodes the x input into an input current. As above, this input current is computed as:

J_{input} = \alpha \left( \vec{x} \cdot \vec{e} \right) + J_{bias}

where \vec{e} is the encoder (note, this can be a scalar if \vec{x} is a scalar), \alpha is the neuron gain, and J_{bias} is the bias current. The spikes that you want to plot are a function of the input current to the neuron. Thus, the spikes from the neuron are always a function of the encoded values. The example I linked in my first post, as well as this “NEF Summary” example, both demonstrate how to obtain these spike plots.
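As a concrete (made-up) instance of that formula for a scalar input:

x = 0.5  # the represented input value
e = 1.0  # encoder (a scalar, since x is a scalar)
alpha = 2.0  # neuron gain
J_bias = 1.5  # bias current

J_input = alpha * (x * e) + J_bias  # = 2.5, the total current driving the neuron's spiking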

In the NEF algorithm code, the decoders (of A) and the encoders (of B) have been combined to form the connection weight matrix:

# compute the weight matrix
weights = numpy.dot(decoder_A, [encoder_B])

So, to answer your question of whether the values are being decoded (at A) and then encoded (at B), the answer is yes. It’s just that both steps have been combined into a single operation. However, as I noted above, in Nengo, these operations are kept separate (for default connections) so as to make the connections and ensembles easier to work with.

Hi @xchoo,
I am a newbie with Nengo and have only just started looking at connection weights, so I am confused when reading your post and Connections in depth — Nengo 3.2.0.dev0 docs.

  1. I know that **in decoded connections, weights are automatically determined through decoder solving**, so does the connection have a full weight matrix between the neurons of the pre-ensemble and the post-ensemble (i.e., weights = numpy.dot(decoder_A, [encoder_B])) or not?
  2. Similarly, in direct connections (i.e., connecting ens1.neurons to ens2.neurons), does the connection have decoder weights as in decoded connections?
  3. Finally, what is the essential difference between a neurons-to-neurons connection and a connection from an ensemble to any other object?

Hi @nkchii, and welcome to the Nengo forums. :smiley:

If I understand your question correctly, you are asking if a decoded Nengo connection (i.e., nengo.Connection(ensA, ensB)) contains the full weight matrix? The answer to that question is no. In Nengo, “weights” are kept in several different places. The ensemble object contains the encoders, and a connection object contains the connection weights. When a connection is made from an ensemble, the connection weights are interpreted as decoders. To get the “full” weight matrix, you’ll need to manually perform the matrix multiplication of the post-ensemble encoders with the connection’s decoders.
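Here is a sketch of that manual multiplication (the ensemble sizes are arbitrary):

import numpy as np
import nengo

with nengo.Network() as model:
    ensA = nengo.Ensemble(50, 1)
    ensB = nengo.Ensemble(40, 1)
    conn = nengo.Connection(ensA, ensB)

with nengo.Simulator(model) as sim:
    decoders = sim.data[conn].weights  # (1, 50): the connection's decoders
    encoders = sim.data[ensB].encoders  # (40, 1): the post-ensemble's encoders
    full_weights = np.dot(encoders, decoders)  # (40, 50): the "full" weight matrix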

The answer here again is no. There are 2 points to be made here. First, a connection from a neuron object will have its connection weights interpreted as the “full” connection weight matrix (in the traditional sense). Second, when connecting to a neuron object, the encoders of the ensemble to which the neurons belong will be ignored. Thus, performing a neuron-to-neuron connection will result in a “full” connection weight matrix. In this case, separating the encoders and decoders from the connection weight matrix is difficult, if not impossible.

The context of the connection weight matrix depends on the pre and post objects of the connection. You could have 4 different types of connections:

Ensemble to ensemble

nengo.Connection(ensA, ensB)

As described above, the connection weights in the connection are treated as decoders, and the ensemble’s (ensB) encoders are used as well.

Ensemble to neuron

nengo.Connection(ensA, ensB.neurons)

Here, the connection weights in the connection are treated as decoders, but ensB’s encoders are ignored (bypassed) in the computation. This method of connection is used frequently to implement inhibitory connections.
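For example, a sketch of such an inhibitory connection (reusing the ensA / ensB names from above, assuming ensA is 1D; the -2.0 strength is an arbitrary illustrative value, and np is numpy):

nengo.Connection(
    ensA, ensB.neurons, transform=-2.0 * np.ones((ensB.n_neurons, 1))
)  # when ensA represents 1, every neuron in ensB receives a strong negative current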

Neuron to neuron

nengo.Connection(ensA.neurons, ensB.neurons)

Here, the connection weights in the connection are treated as the “full” connectivity matrix, and ensB’s encoders are also ignored. This is typically used in connections with learning rules, where the learning rule needs to modify the individual elements in the full connection weight matrix. For such learning rules, using a decoded connection would not work since the learning rule would not have access to the full weight matrix (only the decoders).

Neuron to ensemble

nengo.Connection(ensA.neurons, ensB)

Here, the connection weights in the connection are treated as a connectivity matrix, but ensB’s encoders are also included in the computation. While technically possible, I can’t think of any typical use cases for this type of connection. :smiley:

Note:
You can also have connections to and from other objects (namely, nengo.Nodes). For nodes, connections from them are treated as just a connectivity matrix (pure math), and connections to them are treated the same way.

Thanks so much @xchoo for your helpful answers. However, you said that the learning rule needs to modify the individual elements in the full connection weight matrix, but you then noted that the learning rule would not have access to the full weight matrix (only the decoders). Is one of these statements wrong?

Hello @xchoo, as you said, the learning rule will update the individual elements in the full connection weight matrix, but when I print the full connection weights before and after running the simulation, nothing changes. What is the reason for this? Also, I wonder: if I use a direct connection, I need to initialize the weights through the transform parameter, so how do I know what initial weights are good enough to achieve decent accuracy? Finally, I want to do binary classification with a pure SNN (that is, only Node, Ensemble, and Connection objects), but it doesn’t have a predict attribute like nengo_dl, so what can I do?


And this is my notebook
CNN_SNN_binary.ipynb (22.0 KB)

@nkchii

My statement about the learning rule not having access to the full weight matrix was specifically about decoded connections.

Sorry @xchoo, can you answer me?

That is correct, the learning rule will update the individual elements in the full connection weight matrix. The reason why you are not seeing any changes is because the code you are using gives you the initial weights set on the connection. This value is not updated as the simulation progresses. To get the weights as the simulation is running, you can do one of two things:

  1. Get the connection weights from the sim.signals dictionary, like so:

weights = sim.signals[sim.model.sig[conn]["weights"]]

  2. Get the connection weights using a Nengo probe:

with model:
    probe_weights = nengo.Probe(conn, "weights", sample_every=<sampling_interval_in_secs>)

with nengo.Simulator(model) as sim:
    ...

print(sim.data[probe_weights])

If you want to get the learned weights at the end of a simulation run, you can set the sample_every parameter to the runtime length of the simulation:

probe_weights = nengo.Probe(conn, "weights", sample_every=<simulation_runtime>)

I’m not entirely sure what you want to achieve here, since your previous question indicates you are applying a learning rule to the connection as well. When we construct our Nengo models, connections with learning rules are typically initialized with a random function to demonstrate that the learning rule actually has some effect on the connection.

If you want to create a full connection weight matrix that implements a specific function, you need to use the NEF algorithm to do so. That is to say, you will need to get Nengo to manually compute the decoders for you, and then multiply them with the encoders to get the full connection weight matrix. There are two methods for coding this, but both methods require you to create a Nengo simulator to build the model; the model build process is what creates the encoders and decoders. Since the Nengo simulator object is created multiple times, you will also need to seed the ensemble objects to ensure that they are given identical parameters across the different simulations.

So, the general approach for computing the weight matrix is:

# Define ensembles needed for connection weight matrix. Make sure they are seeded
with nengo.Network() as prebuild:
    ens1 = nengo.Ensemble(..., seed=<seed_val1>)
    ens2 = nengo.Ensemble(..., seed=<seed_val2>)
    conn = nengo.Connection(...)

# Create a simulator object to build the ensembles:
with nengo.Simulator(prebuild) as simbuild:
    # Extract out the ensemble parameters and compute the weight matrix
    ...

# If the weight matrix is to be used in a Nengo model, create the Nengo model with the same
# seeds as the `prebuild` model
with nengo.Network() as model:
    ...
    ens1 = nengo.Ensemble(..., seed=<seed_val1>)
    ens2 = nengo.Ensemble(..., seed=<seed_val2>)
    conn = nengo.Connection(ens1.neurons, ens2.neurons, transform=<weight_matrix>)
    ...

The actual computation of the weight matrix can be done in two ways. The first method is basically the NEF algorithm: you solve for the decoders on the pre population that compute the function you want, then you multiply them with the encoders of the post population to get the full weight matrix. Since you are doing everything manually, solving for the decoders requires invoking the Nengo solver directly, which takes a bit of work. The code to do this is outlined below (put the code in the # Extract out the ensemble parameters section of the code above):

import numpy as np

# Reference to the built ens1 object
built_ens1 = simbuild.data[ens1]

# Get the "x" values (evaluation points scaled by encoders) for ens1
x_vals = np.dot(built_ens1.eval_points, built_ens1.encoders.T / ens1.radius)

# Get the activity values corresponding to the x values 
activities = ens1.neuron_type.rates(x_vals, built_ens1.gain, built_ens1.bias)

# Create the solver, and use it to solve for the decoders of ens1 that compute a specific
# output function (`output_func` is whatever function you want the connection to compute)
solver = nengo.solvers.LstsqL2(weights=True)
decoders, _ = solver(activities, output_func(built_ens1.eval_points))
# # Note that if the ensemble is doing a communication channel, then no `output_func` call is needed:
# decoders, _ = solver(activities, built_ens1.eval_points)

# Compute the full weight matrix by doing the matrix multiplication with the encoders of ens2
weights = np.dot(simbuild.data[ens2].encoders, decoders.T)

The second method to compute the full weight matrix is to use the solver parameter on a Nengo connection. If you create a connection with a solver that has weights=True, Nengo will build the connection with the full weight matrix. One important note, however, is that this weight matrix contains the gains of the post population, and thus these gains need to be divided out to get the full connection weight matrix.

with nengo.Network() as prebuild:
    ...
    # Create the connection with `weights=True`
    conn = nengo.Connection(
        ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True), function=output_func
    )

with nengo.Simulator(prebuild) as simbuild:
    weights = simbuild.data[conn].weights

    # Get the gains for ens2 (one gain per post neuron, i.e., one per row of the weight matrix)
    ens2_gains = simbuild.data[ens2].gain[:, np.newaxis]

    # Divide the gains out of the weight matrix to get the full connection weight matrix
    weights = weights / ens2_gains

NengoDL supports the creation and use of regular Nengo models with the predict/fit/compile/etc. TF functions. So, all you have to do is create your Nengo model, then use the nengo_dl.Simulator object to call the predict function. Refer to this page for examples on how to use your (regular) Nengo model with NengoDL.
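Here is a minimal sketch of that workflow (the network, data shapes, and values are placeholders):

import numpy as np
import nengo
import nengo_dl

with nengo.Network() as model:
    inp = nengo.Node(np.zeros(2))  # placeholder input node
    ens = nengo.Ensemble(50, 2)
    nengo.Connection(inp, ens)
    out_p = nengo.Probe(ens)

with nengo_dl.Simulator(model, minibatch_size=1) as sim:
    data = np.ones((1, 10, 2))  # (batch size, timesteps, node size)
    predictions = sim.predict({inp: data})  # dict mapping each probe to its output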
