Decoded vs Direct Connections

Is there any disadvantage or advantage to using direct connections instead of decoded connections? Using decoded connections is a bit confusing to me. My understanding is that it is the inverse of the encoding process, wherein it reproduces the input, X (or some function of the input, f(x)), to a neuron population from the output spikes? Direct connections, as the name suggests, are weighted connections made directly between neuron groups.

Say I have two neuron layers/populations, A and B. Layer A receives input directly from some external source, say, X. I want the output spikes of A to be fed to B (fully connected).


Mathematically, there is no difference between the NEF’s method of using encoders, decoders, and a transform and what you refer to as “direct” connections. More specifically, any one element of the full connection weight matrix is $w_{ij} = \alpha_j \, \mathbf{e}_j \cdot \mathbf{d}_i$, where:

  • $\alpha_j$ is the gain of the $j^{th}$ neuron in B (note, the NEF transform would be rolled into this value)
  • $\mathbf{e}_j$ is the encoder for the $j^{th}$ neuron in B
  • $\mathbf{d}_i$ is the decoder for the $i^{th}$ neuron in A.

In practice, however, using decoded connections can speed up any computation involving the connection weight matrix. Assuming $n$ neurons in the A population and $m$ neurons in the B population, a decoded connection requires on the order of $m + n$ multiplications per timestep (for a scalar representation; roughly $d(m + n)$ for a $d$-dimensional one). With a direct connection, you’d have to do $m \times n$ multiplications to compute the effect of the equivalent weight matrix.
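To make the cost difference concrete, here is a minimal NumPy sketch (the sizes are illustrative) verifying that the factored computation matches the full weight matrix:

import numpy as np

n, m, d = 100, 80, 1       # neurons in A, neurons in B, represented dimensions

D = np.random.randn(d, n)  # decoders of A (one d-dimensional decoder per A neuron)
E = np.random.randn(m, d)  # encoders of B, with the gains folded in
a = np.random.rand(n)      # instantaneous activities of A's neurons

W = E @ D                  # full (m x n) connection weight matrix

direct = W @ a             # "direct": m * n multiplications per step
factored = E @ (D @ a)     # factored: d * (m + n) multiplications per step

assert np.allclose(direct, factored)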

Using decoded connections is a bit confusing to me. My understanding is that it is the inverse of the encoding process, wherein it reproduces the input, X (or some function of the input, f(x)), to a neuron population from the output spikes?

Yes, the decoding process is essentially the reverse of the encoding process. Consider the following scenario: you have a population of neurons tasked with representing an $N$-dimensional vector (just pure representation, no function computed).

The encoders serve to project that $N$-dimensional input into a space the neurons can work with (i.e., scalar current values). Once this is done, however, the information represented in the ensemble remains in “neuron” space (i.e., the firing rates of the neurons, or the individual spikes from the neurons). In order to reproduce the input vector, you need to project the neuron activities back into the $N$-dimensional space. This is what the decoders accomplish: each neuron’s activity is multiplied by its decoding vector, and the results are summed.

But the cool thing about decoders is that they allow you to project the neuron activities into a space that is completely different from the input space, simply by changing the decoders used. The output space can also be “warped” to approximate functions of the input space. That’s essentially how the NEF “computes” functions with neurons.
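In equations (using the same symbols as the weight factorization above), encoding computes the input current to neuron $i$ as $J_i = \alpha_i \, \mathbf{e}_i \cdot \mathbf{x} + J_i^{bias}$, and decoding estimates a function of the input from the neuron activities $a_i$ as $\hat{f}(\mathbf{x}) = \sum_i a_i(\mathbf{x}) \, \mathbf{d}_i^{f}$, where the decoders $\mathbf{d}_i^{f}$ are solved for (typically with regularized least squares) to best approximate $f$ over the ensemble’s represented range.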

=========

I should point out that direct connections are not without their uses. We use them quite often to implement inhibitory connections (a sketch follows the connection snippet below), and when learning rules operate on the full connection weight matrix instead of the factored encoder/decoder parts.

=========

As a side note, if you want to construct a “direct” connection in Nengo, it’s possible to do so with:

nengo.Connection(ensemble_a.neurons, ensemble_b.neurons, transform=weight_matrix)
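For example, the inhibitory pattern mentioned above can be implemented by driving every neuron with a strong negative current. A minimal sketch (the weight of -2 is just an illustrative value):

import numpy as np
import nengo

n = 50

with nengo.Network() as model:
    inhibit = nengo.Node(1)  # control signal: outputs 1 to inhibit the ensemble
    ens = nengo.Ensemble(n, dimensions=1)
    # A large negative current to every neuron suppresses all firing
    nengo.Connection(inhibit, ens.neurons, transform=-2 * np.ones((n, 1)))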

Thanks. I understand how to make direct connections now, but I am still confused about decoders. My goal is to build at least a two-layer SNN with connections initialized randomly and then trained using NengoDL on the 8x8 handwritten digit MNIST. After training, I will use the connection weights to hardcode the connections in a circuit. I may have to use decoders for efficiency, so I think I should try to understand how they work and why.

Why do we want to map the neural activity back into the N-dimensional input space? If we want to compute f(x) = x^2, for instance, wouldn’t it be easier to compute that classically before feeding it into the neural network? If we want to compute a function that is hard to define, say, a function that performs edge detection on the input, how are these decoder weights specified? Should I specify the function I want it to compute, f(x) = x or some other function? When training the SNN, what happens to these decoders and encoders?

Also, say I have an 8x8-pixel image as input to an SNN. I understand I can’t use this directly, so I flattened it to a 1x64 vector. The first SNN layer has 64 neurons. I want the ith neuron to receive as its single input only the ith pixel in the 1x64 input vector. It’s a one-to-one connection. What would be the parameters in nengo.Connection?

Alternatively, I wanted to modify the currents on each neuron in the SNN layer directly, but I’m not sure how to do that.

My goal is to build at least a two-layer SNN with connections initialized randomly and then trained using NengoDL on the 8x8 handwritten digit MNIST. After training, I will use the connection weights to hardcode the connections in a circuit. I may have to use decoders for efficiency, so I think I should try to understand how they work and why.

For this specific use case, because the connections between the two layers in the SNN are randomized, that connection won’t use decoders and encoders. As mentioned in a previous post, encoders and decoders can be combined to form the connection weights. Thus, you can reverse that formulation and “derive” the encoders and decoders from a set of trained connection weights by factoring the weights into their appropriate components (this is typically not easy to do, so connection weights trained in this way are usually left unfactored; see the sketch below for the general idea). In your example, the one place you might need decoders is on the readout layer (i.e., the second layer’s output).
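For illustration, one generic way to factor a trained weight matrix into low-rank parts that play the roles of encoders and decoders is a truncated SVD. This is just a sketch of the idea, not a built-in Nengo feature:

import numpy as np

m, n, rank = 10, 20, 4      # illustrative sizes
W = np.random.randn(m, n)   # stand-in for a trained (m x n) weight matrix

U, s, Vt = np.linalg.svd(W)
E = U[:, :rank] * s[:rank]  # plays the role of scaled encoders (m x rank)
D = Vt[:rank, :]            # plays the role of decoders (rank x n)

W_approx = E @ D            # best rank-`rank` approximation of W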

Why do we want to map the neural activity back into the N-dimensional input space? If we want to compute f(x) = x^2, for instance, wouldn’t it be easier to compute that classically before feeding it into the neural network?

The NEF (Neural Engineering Framework) was developed with the design philosophy of implementing as much of the computation as possible within the spiking neural network, with the eventual goal of implementing entire systems (end-to-end) wholly within a spiking neural network. Spaun, for example, is a vision-to-cognition-to-motor system completely implemented in a spiking neural network. For networks of that scale, the ability to compute decoders is absolutely crucial, since training a network of that size from scratch is close to impossible. For hardware implementations, the NEF gives us the ability to implement arbitrary computations in an SNN, so that we can take full advantage of the benefits that SNNs in hardware provide.

However, if you do not have these goals in mind for your network, then using the NEF to compute decoders that approximate arbitrary functions is not necessary.

If we want to compute a function that is hard to define, say, a function that performs edge detection on the input, how are these decoder weights specified? Should I specify the function I want it to compute, f(x) = x or some other function?

The beauty of Nengo is that you can give an arbitrary Python function to the decoder solver when creating a connection. Nengo uses this function to create a mapping between a set of inputs (randomly generated) and the corresponding set of outputs (determined by the logic of the given function). This mapping is then used to solve for the ensemble’s decoders. This method could be used to approximate edge detection, but I haven’t tried it myself, so I can’t comment on the effectiveness of such an approach.
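As a minimal sketch of what this looks like in code (the ensemble sizes are arbitrary):

import nengo

with nengo.Network() as model:
    a = nengo.Ensemble(100, dimensions=1)
    b = nengo.Ensemble(100, dimensions=1)
    # Nengo samples evaluation points from a's represented space, applies the
    # given function to them, and solves for decoders that best approximate
    # the resulting input-output mapping.
    nengo.Connection(a, b, function=lambda x: x ** 2)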

For more complex functions, it is typically more helpful to break the function down into simpler components. For your example of edge detection, you could try applying a filter (e.g., a Gabor filter; there is a Nengo example demonstrating this), and then computing a function on the output of the filters (i.e., the feature vector).

When training the SNN, what happens to these decoder and encoders?

In NengoDL, you can specify which Nengo objects to train. Only objects whose trainable attribute is set to True will have their parameters modified during the training process. A sketch of this configuration is shown below.
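As a sketch of how that configuration looks in NengoDL (see the NengoDL documentation for the exact config semantics):

import nengo
import nengo_dl

with nengo.Network() as net:
    # Expose the `trainable` option through the config system
    nengo_dl.configure_settings(trainable=None)

    a = nengo.Ensemble(100, 1)
    b = nengo.Ensemble(100, 1)
    nengo.Connection(a, b)

    # Freeze this ensemble's parameters during training
    net.config[a].trainable = False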

Also, say I have an 8x8-pixel image as input to an SNN. I understand I can’t use this directly, so I flattened it to a 1x64 vector. The first SNN layer has 64 neurons. I want the ith neuron to receive as its single input only the ith pixel in the 1x64 input vector. It’s a one-to-one connection. What would be the parameters in nengo.Connection?

You can achieve this by modifying the encoders of the ensemble. Here’s how you would do it:

import nengo
import numpy as np
from nengo.dists import Choice, Samples

d = 64        # input dimensionality (one per pixel)
n = d         # one neuron per pixel
eye = np.eye(d)

with nengo.Network() as model:
    # Each neuron's encoder is a row of the identity matrix, so neuron i
    # responds only to dimension (pixel) i of the input vector.
    ens = nengo.Ensemble(n, d, encoders=Samples(eye))

# Note: you can use the `Choice` distribution here as well. `Samples` uses the
# given rows in order (cycling when it reaches the end of the list), while
# `Choice` picks rows at random from the given options.

I should note that the biases and gains for each neuron in the ensemble are chosen at random. If you want to fix the neuron response curves to make them all identical, you can do:

ens = nengo.Ensemble(d, d, encoders=Samples(eye), intercepts=Choice([0]),
                     max_rates=Choice([200]))
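With the encoders set up this way, the connection itself needs no special parameters, since the identity encoders do the routing. A minimal sketch (pixel_input here is a hypothetical node standing in for your flattened 8x8 image):

with model:
    pixel_input = nengo.Node(np.zeros(d))  # stand-in for the flattened image
    nengo.Connection(pixel_input, ens)     # default (identity) transform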

Alternatively, I wanted to modify the currents on each neuron in the SNN layer directly, but I’m not sure how to do that.

Can you elaborate on what you want to do with the neuron currents? Do you want to inject a constant bias, or to modify the currents on the fly while the simulation is running?


Hello xchoo,

Thank you very much for being patient with my questions. Your replies are very clear.

Can you elaborate on what you want to do with the neuron currents? Do you want to inject a constant bias, or to modify the currents on the fly while the simulation is running?

Yes, I want to inject a constant bias. I think it should be possible by just setting the bias, right? I have been using bias and gain instead of max_rates and intercepts. I wanted each neuron to receive a bias current proportional to its respective pixel input. But in the long term, during training, I think I should do as you suggested.

This is a very informative discussion! Thanks for the questions and explanations. I am working on a similar thing where I have a single Ensemble with a recurrent “direct” connection (not to be confused with the Direct neuron type) based on a weight transform. It’s fairly large, and I am interested in performance.

Even if one is not using the NEF encoders, they are still calculated at run time:
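The relevant build code (paraphrasing from Nengo’s ensemble builder; the exact form may vary between Nengo versions) is roughly:

model.add_op(DotInc(model.sig[ens]["encoders"], model.sig[ens]["in"],
                    model.sig[ens.neurons]["in"]))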


[This code schedules an operation (add_op) that takes the dot product (DotInc) of the ensemble’s encoders and the ensemble’s NEF input (model.sig[ens]["in"]), and projects the result onto the ensemble’s neuron inputs (model.sig[ens.neurons]["in"]). It is the code that implements the encoding equation above.]

When you have inhibition, or training, or a direct weight transform, model.sig[ens]["in"] is always zero, yet this dot product is still computed. I wonder if there’s a way to avoid calculating the NEF encoders for an Ensemble that is only used with “direct” connections. Is there some kind of raw Ensemble object that just computes the neuron nonlinearities, or is there some way to set the encoders so as to avoid the operation?

Yup! You can set a bias current by modifying the bias and gain values. Using a nengo.Node is an alternative way (in my opinion, a more flexible way) to introduce bias currents to neurons:

ens = nengo.Ensemble(4, 1)  # 4 neurons; the represented dimensionality is not important here
bias_node = nengo.Node(1)

bias_transform = [[b0], [b1], [b2], [b3]]  # bN is the bias value for the Nth neuron

nengo.Connection(bias_node, ens.neurons, transform=bias_transform)

Yes, you are correct. Unfortunately, in the current implementation of nengo.Ensemble in Nengo core, the encoder computation is performed regardless of whether or not the encoders are in use. We are aware of this, and are planning to change the Nengo API to facilitate this kind of network configuration in the future. These changes are planned to be incorporated into the Nengo 4.0 release.

I should note that this feature is already available in the current NengoDL release. If you are using NengoDL, it automatically removes any operators that do not contribute effective computation to the operator graph.

Perfect! I am using NengoOCL and trying to transition to NengoDL. It’s great to know that it already does this. It is also interesting to see the move toward first-class low-level neural network concepts in Nengo 4.0. I definitely support that; it’s a good simulator even outside the NEF.

When using Nengo core, I imagine that performance is not usually the top priority; however, many Nengo backends post-process the Nengo core build results, so it will be beneficial for Nengo 4.0 to have the option to avoid calculating the encoders in the first place.

For reference, I commented out that add_op in Nengo while using NengoOCL, and build time dropped by 72%.

I have tried using your method. I get the same result if I use direct connections:

snn1 = nengo.Ensemble(n_neurons=64, dimensions=64, gain=gaineff, bias=biaseff,
                      neuron_type=nengo.LIF(tau_rc=taurc, tau_ref=tref, min_voltage=0))
conn1 = nengo.Connection(pixelinput, snn1.neurons)

I am trying to learn both methods.
However, I noticed that for the 64-neuron layer, I cannot display more than 5 neuron voltages, although I can display all 64 spike trains. Is this right, or is there something wrong with my connections/setup?

Hmm, no, that is not normal. I am able to replicate the issue on my system as well, so it seems to be a bug in the code. Once I track down where it is occurring, I’ll file a bug report on the appropriate code repository.

So, the dev team got back to me. Apparently, it’s a known issue with the current iteration of NengoGUI. There is a way to display more than 5 neurons on the voltage graph, and that is to edit the *.cfg file that is found in the same folder as your script. If your script is called myscript.py, the configuration file would be called myscript.py.cfg.

Inside the config file, look for a nengo_gui.components.Voltage entry and modify it as follows.
The default value should be:

_viz_2 = nengo_gui.components.Voltage(ens)  # Note: _viz_2 might be called something else

and edit it to:

_viz_2 = nengo_gui.components.Voltage(ens, n_neurons=n_neurons_to_show)

Save the config file, then reload your script in the GUI. Next, update the range of the voltage plot to include all of the neurons you wish to display. I should note that any changes made to the layout in the GUI will overwrite this custom setting, so you may have to re-modify the file when any changes are made to the network layout.

We are working on a new version of NengoGUI (it is a rewrite from the ground up), and this issue will be addressed then.

Hello @iampaulq,
Can you tell me how to display the image input, as you did, when using NengoGUI?

Hi @ntquyen2,

There is an example script in NengoExtras that demonstrates how to display images in NengoGUI; a minimal sketch follows below.

You’ll need to install the NengoExtras repository to use this code. NengoExtras should be installable using pip install nengo-extras