Mathematically, there is no difference between the NEF’s method of using encoders, decoders, and a transform and what you refer to as “direct” connections. More specifically, for a connection from population A to population B, any one element of the full connection weight matrix is w_{ij} = \alpha_j \mathbf{e}_j \cdot \mathbf{d}_i, where:
- \alpha_j is the gain of the j^{th} neuron in B (note, the NEF transform would be rolled into this value)
- \mathbf{e}_j is the encoder for the j^{th} neuron in B
- \mathbf{d}_i is the decoder for the i^{th} neuron in A.
In practice, however, using decoded connections can speed up any computation involving the connection weight matrix. Assuming n neurons in the A population, m neurons in the B population, and a d-dimensional represented value, a decoded connection requires on the order of (m + n) \times d multiplications per timestep: first decode A’s activities into the d-dimensional space, then encode that vector into B. Using a direct connection, you’d instead have to do m \times n multiplications to apply the equivalent weight matrix.
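To make that cost comparison concrete, here is a minimal numpy sketch. The gains, encoders, and decoders below are random stand-ins (not values solved for by Nengo); the point is only that the factored computation and the full weight matrix produce the same output, at different cost:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 50, 40, 3                 # neurons in A, neurons in B, dimensions

a = rng.random(n)                   # activities of the A population
D = rng.standard_normal((d, n))     # decoders of A (d x n)
E = rng.standard_normal((m, d))     # encoders of B (m x d)
alpha = rng.random(m)               # gains of the B neurons

# Full weight matrix: W[j, i] = alpha_j * (e_j . d_i)
W = alpha[:, None] * (E @ D)

# Direct connection: one big matrix-vector product, m * n multiplications
y_direct = W @ a

# Decoded connection: decode (n * d mults), then encode (m * d mults)
x_hat = D @ a
y_factored = alpha * (E @ x_hat)

assert np.allclose(y_direct, y_factored)
```

With n = m = 1000 and d = 1, the direct route costs a million multiplications per step where the factored route costs about two thousand, which is why decoded connections are the default.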
Using decoded connections seems a bit confusing to me. My understanding is that it is the inverse of the encoding process, wherein it reproduces the input x (or some function of the input, f(x)) to a neuron population from the output spikes?
Yes, the decoding process is essentially the reverse of the encoding process. Consider the following scenario: you have a population of neurons tasked with representing an N-dimensional vector (just pure representation, no function computed).
The encoders serve to project that N-dimensional input into a space the neurons can work with (i.e., scalar current values). Once this is done, however, the information represented in the ensemble remains in “neuron” space (i.e., the firing rates of the neurons, or the individual spikes from the neurons). In order to reproduce the input vector, you’ll need to project the neuron activities back into the N-dimensional space. This is what the decoders accomplish (by multiplying each neuron’s activity by its decoding vector and summing across the population). But! The cool thing about the decoders is that they allow you to project the neuron activities into a space that is completely different from the input space, simply by changing the decoders used. Note that the output space can also be “warped” to approximate functions of the input space. That’s essentially how the NEF “computes” functions with neurons.
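As a sketch of that last point, here is a numpy example that solves for two different sets of decoders from the same neuron activities: one decoding x itself, one decoding f(x) = x^2. It uses simple rectified-linear tuning curves and plain regularized least squares (similar in spirit to, but not the same as, Nengo’s actual solvers); all the parameter values are arbitrary stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                          # neurons in the population
x = np.linspace(-1, 1, 200)      # sample points in the represented space

# Rectified-linear tuning curves: a_i(x) = max(0, gain_i * e_i * x + bias_i)
gains = rng.uniform(0.5, 2.0, n)
encoders = rng.choice([-1.0, 1.0], n)
biases = rng.uniform(-1.0, 1.0, n)
A = np.maximum(0.0, gains * encoders * x[:, None] + biases)  # (200, n)

def solve_decoders(A, target, reg=0.01):
    """Regularized least squares: minimize ||A @ d - target||^2 + reg penalty."""
    G = A.T @ A + reg * len(A) * np.eye(A.shape[1])
    return np.linalg.solve(G, A.T @ target)

d_identity = solve_decoders(A, x)      # decoders that reproduce x
d_square = solve_decoders(A, x ** 2)   # decoders for f(x) = x^2, SAME activities

x_hat = A @ d_identity                 # decoded estimate of x
sq_hat = A @ d_square                  # decoded estimate of x^2
```

Nothing about the population changed between the two solves; only the decoders did, which is the sense in which the output space can be “warped” into a function of the input.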
=========
I should point out that direct connections are not without their uses. We use them quite often to implement inhibitory connections, and when learning rules operate on the full connection weight matrix instead of the factored encoder/decoder parts.
=========
As a side note, if you want to construct a “direct” connection in Nengo, it’s possible to do so with:
nengo.Connection(ensemble_a.neurons, ensemble_b.neurons, transform=weight_matrix)
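As a concrete instance of the inhibitory use case mentioned above, one common pattern is a uniform negative weight matrix, so that any activity in A pushes every neuron in B below threshold. A minimal sketch of building such a matrix (the population sizes and the -10 scale are arbitrary stand-ins; in practice the scale is tuned against the target population’s gains and biases):

```python
import numpy as np

n, m = 50, 40                        # neurons in ensemble_a, ensemble_b

# Uniform inhibitory weights from every A neuron to every B neuron.
# Shape is (post neurons, pre neurons), matching the transform above.
inhibit = -10.0 * np.ones((m, n))

# In a Nengo model this matrix would then be used as:
# nengo.Connection(ensemble_a.neurons, ensemble_b.neurons, transform=inhibit)
assert inhibit.shape == (m, n)
```

Note that there is no useful factored encoder/decoder form of this matrix to exploit, which is exactly why it is expressed as a direct neuron-to-neuron connection.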