# How the function is applied in a connection

Hi,

I was looking at the controlled oscillator example: Controlled oscillator — Nengo 3.2.1.dev0 docs

In this example, the feedback connection has a function that computes values across the dimensions of the connection.

I was wondering how the function is actually applied in Nengo.

I read that when a function is used on a connection, the pre object must be an ensemble. If that is the case, is the function applied in the following order?

1. The spiking signals from the neurons of the pre ensemble are first decoded through the connection synapse (0.1 in the example).
2. The function is then applied to the decoded values.
3. The values after the function computation are the output of the connection.

Hi @corricvale, and welcome back to the Nengo forums! That’s not actually how the function works. In this post, I walk through several ensemble & connection properties, and the important line that pertains to your question is this:

To summarize, the process Nengo uses to obtain the decoders is as follows:

• For some set of evaluation points, obtain the corresponding neural activity
• For the same set of evaluation points, obtain a set of “output” data determined by the function and transform specified for the connection.
• Solve for decoders that, when applied to the neural response curves, map the evaluation points to the desired set of output data.

I should note that in Nengo, the decoders are associated with the connection object that is “computing” the function. I.e., you can have multiple decoders per Nengo ensemble (if you have multiple connections originating from that ensemble).
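The three steps above can be sketched in plain numpy. Everything here is illustrative rather than Nengo's internals: the rectified-linear tuning curves, the regularization constant, and all variable names are made up for this example.

```python
import numpy as np

# Illustrative sketch of the three decoder-solving steps for a 1D ensemble
# "computing" f(x) = x**2. The tuning-curve model and regularization below
# are made up for this example, not Nengo's actual defaults.
rng = np.random.default_rng(0)
n_neurons, n_eval = 50, 500

# Step 1: evaluation points and the corresponding neural activities
eval_points = rng.uniform(-1, 1, size=n_eval)
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(5, 20, size=n_neurons)
biases = rng.uniform(-10, 10, size=n_neurons)
A = np.maximum(0, gains * eval_points[:, None] * encoders + biases)

# Step 2: the desired outputs, i.e. the function applied to the eval points
targets = eval_points ** 2

# Step 3: regularized least squares for decoders d such that A @ d ~= targets
reg = 0.1 * A.max()
d = np.linalg.solve(A.T @ A + reg**2 * n_eval * np.eye(n_neurons), A.T @ targets)

rmse = np.sqrt(np.mean((A @ d - targets) ** 2))
```

Note that the solver never needs spikes: it only ever sees the (real-valued) activity matrix `A` and the target outputs.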

I am puzzled at this point. I posted a similar question https://forum.nengo.ai/t/how-do-you-access-an-ensembles-decoder/2135

In the controlled oscillator example https://www.nengo.ai/nengo/examples/dynamics/controlled-oscillator.html, the 3D feedback connection has a function:

```python
def feedback(x):
    x0, x1, w = x  # These are the three variables stored in the ensemble
    return x0 - w * w_max * tau * x1, x1 + w * w_max * tau * x0, 0
```

which always returns 0 for the last dimension.
Therefore, as you said, the decoders for the last dimension of the connection are all 0.

But if I understand correctly, the input to the function has to be a real value, not spiking signals from the neurons. Therefore, wouldn't the spiking signals from the ensemble need to be decoded first before the function can compute its value? If so, then obviously the decoder values solved after the function has been computed can't be used to recover the input of the function, as they are all 0 and the function would always read the w value as 0.

It seems like there must be an intermediate decoding step that computes the input of the function first.
If so, is there a way to access those intermediate decoder values?

That is correct. I’ve posted a response to your question there demonstrating how to obtain the decoders for the feedback connection.

This depends on what exactly you are asking. There is a difference between the steps used to solve for the decoders, and how these decoders are actually applied to the spiking signals.

When solving for the decoders, yes, the inputs to the function are real-valued. Recall that the decoders serve to map a set of neuron firing rates to a desired decoded output (this output can be some function as well). For the neuron types included in Nengo, even if you are using spiking neurons, the decoder solving step will use a rate approximation of the neuron's activation curve. Since a neuron's firing rate (for a given input) is a real-valued number (and not spikes), the solver algorithm will have no problem solving for the decoding weights that achieve the desired function.

The application of the decoders to spiking signals (to produce the decoded output) is an even more straightforward process. Since the decoders are just weights, the spiking signal is weighted (multiplied) by the decoders, and then filtered by the connection's synapse (since the operations are commutative, the order of filtering and decoder multiplication doesn't matter).

@xchoo

> For the neuron types included in Nengo, even if you are using spiking neurons, the decoder solving step will use a rate approximation of the neuron's activation curve. Since a neuron's firing rate (for a given input) is a real-valued number (and not spikes), the solver algorithm will have no problem solving for the decoding weights that achieve the desired function.

Wouldn't you still need some sort of decoders to derive the values for the input of the function, even with the rate-approximated neuron model? What I understand is as follows:

I mean, for the solver to map the outputs to the firing rates, it must first know the desired outputs and the firing rates at those outputs. If I understand correctly, the solver is basically doing this task: "Given these firing rates and these desired outputs, find the decoder values that best map the rates to the desired outputs."
Now with the function applied, it would be: "Given these firing rates and these desired outputs of the function, find the decoder values that best map the rates to the desired function outputs." (These decoder values are what I get when I probe the connection.)

The question is, how does it know what the outputs of the function are? For any system, to know the output of a function, you need to know its inputs. To derive the inputs to the function from the spiking signals (even when they are rate approximated), wouldn't you need decoders to map the firing rates to some values? In the controlled oscillator example, yes, the decoder values on the feedback connection for the last dimension are all 0, since the function just returns 0. However, that doesn't mean the inputs to the function for the last dimension are all 0. My question is: how does Nengo convert the firing rates to the inputs of the function?

Or are you saying that the rates are used directly as inputs for the function?

Thank you!

Hi @corricvale,

You are absolutely correct. I did neglect to fully explain one crucial part of the decoder solving step (because it can be done in a couple of ways), and that is the process by which the function inputs are derived.

In order to understand this process, I first need to explain how information flows through the neuron (i.e., how an input goes into a neuron to generate some output). In Nengo (and the NEF), as well as in other neural network architectures, the computation follows a similar schema:

input → input weights → neuron non-linearity → output weights → output

Before continuing, I should note that for multi-layer networks, some neural networks combine the input and output weights into one “weight matrix”:

input → L1 input weights → L1 non-linearity → L1 output weights → L2 input weights → L2 non-linearity → L2 output weights → output

becomes:

input → L1 input weights → L1 non-linearity → L1-L2 weight matrix → L2 non-linearity → L2 output weights → output

In the default mode of operation (i.e., using the NEF algorithm), the input weights are called "encoders", and the output weights are called "decoders". What's different about Nengo compared to other neural network training software is that Nengo uses the NEF algorithm to solve for a set of decoders that "compute" a user-determined function, whereas other neural network training software (e.g., TensorFlow) uses a learning algorithm to tune these weights. Each method has its own advantages and disadvantages, but that's beyond the scope of the discussion here.
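Incidentally, the reason the L1 output weights and L2 input weights can be collapsed into a single weight matrix is that both steps are linear, so they compose into one matrix product. A toy numpy sketch (all shapes and values are illustrative):

```python
import numpy as np

# Sketch: collapsing L1 decoders and L2 encoders into one weight matrix.
# Shapes and values are made up for illustration.
rng = np.random.default_rng(4)
d = 2                                     # dimensions represented between layers
l1_decoders = rng.normal(size=(30, d))    # (L1 neurons, d)
l2_encoders = rng.normal(size=(d, 40))    # (d, L2 neurons)

# Because both steps are linear, they compose into one (30, 40) matrix
w = l1_decoders @ l2_encoders

l1_activities = rng.random(30)
direct = (l1_activities @ l1_decoders) @ l2_encoders
combined = l1_activities @ w
print(np.allclose(direct, combined))
```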

To summarize, in Nengo, an input signal is multiplied with a set of encoders (in Nengo, encoders are randomly generated by default) that "convert" the input signal into current that is fed into the neuron. This input current is then put through the neuron activation function to determine the neuron's activity (for rate neurons this would be the neuron firing rate) for the given input. The neuron activity is then multiplied with the decoders (solved through the connection function) to produce an output signal.
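As a toy sketch of that signal chain for a single input value (the tuning parameters and decoder values below are made up for illustration; in Nengo the decoders would be solved for, not hand-picked):

```python
import numpy as np

# Toy sketch of the NEF signal chain for a 1D ensemble of 4 neurons.
# All numbers here are made up for illustration.
encoders = np.array([1.0, -1.0, 1.0, -1.0])        # input weights
gains = np.array([10.0, 12.0, 8.0, 15.0])
biases = np.array([2.0, -1.0, 0.5, -3.0])
decoders = np.array([0.01, -0.02, 0.015, -0.005])  # output weights (normally solved)

def decode(x):
    current = gains * (encoders * x) + biases  # encode: input -> current
    activity = np.maximum(0.0, current)        # nonlinearity: current -> firing rate
    return activity @ decoders                 # decode: rates -> output signal

output = decode(0.5)
```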

As for the actual process of generating the function inputs (to the decoder solver), there are two ways you can do it. The naive way is to assume that any valid input signal presented to the neurons will map on to one of the possible firing rates on the neuron’s activation curve. With this assumption, you can then use the entire activation curve (defined between some range of input currents) as the input data to the function that the decoder solver must solve.

The naive approach, however, has the disadvantage that it generates a lot of data (depending on the resolution of the neuron’s activation curve). Nengo thus uses an alternative approach, and that is to generate a random set of input values (known as evaluation points), which are then multiplied by the encoders, and put through the neuron’s activation function to obtain the respective neuron’s firing rate (which can then be used as the function’s input data). If you look at the API for the `nengo.Connection` object, this is what the `eval_points` parameter specifies. Note that evaluation points can also be specified on the `nengo.Ensemble` object which will be used if multiple output connections are created from said ensemble. By default, the evaluation points are chosen from the unit hypersphere (the hypersphere where the radius is 1) determined by the ensemble’s dimensionality.

Once again, to summarize, for each ensemble, Nengo creates a set of evaluation points. These evaluation points are multiplied by the ensemble’s encoder, and then used to compute the corresponding activities for each evaluation point. These activities are then used as input data for the solver to solve for the connection’s decoders.
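For completeness, a sketch of drawing evaluation points uniformly from within the unit hypersphere for a 3D ensemble. This is a standard sampling trick (Gaussian directions plus a `u**(1/d)` radius), shown here for illustration; Nengo's actual default distribution is `nengo.dists.UniformHypersphere`.

```python
import numpy as np

# Sketch: sample points uniformly from within the unit hypersphere (ball)
# for a d-dimensional ensemble. Illustrative, not Nengo's implementation.
rng = np.random.default_rng(2)
d, n = 3, 1000

# Direction: normalize Gaussian samples; radius: u**(1/d) for uniform density
directions = rng.normal(size=(n, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
radii = rng.random(n) ** (1.0 / d)
eval_points = directions * radii[:, None]

max_norm = np.linalg.norm(eval_points, axis=1).max()
```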


@xchoo

That clarifies a lot.

May I ask one more question, if that doesn't bother you?

I guess those decoder values that were solved through the connection function are the ones you see when you read the weights of the feedback connection (the connection that has the function implemented). I access them by reading `sim.data[feedback_connection].weights`.

If that is the case, then imagine another model with exactly the same settings as the controlled oscillator model (same encoders, biases, and weights for the ensembles and connections). If I replace the feedback connection with a normal connection that does not have any function implemented, and overwrite the weights of that connection with the weights stored in the feedback connection of the original model, would the two models behave identically?

That's correct! Although, in Nengo, you can't modify the decoders of a connection once it's made. The decoders are a read-only property of the connection. In order to set a connection's decoders to a specific value, you'll need to use the `NoSolver` solver for the `solver` parameter of the Nengo connection, like so:

```python
nengo.Connection(..., ..., solver=nengo.solvers.NoSolver(weights, weights=False))
```

Here’s some example code to demonstrate the use of the `NoSolver` solver. Note that if you want to use the decoder values, you need to transpose the decoders to get it to work.