How to determine the optimal neuron encoders and bias?

I’m confused about how the neuron encoders and bias are set.
I’ve gone through some examples where the neuron encoders are explicitly set, like this:
encoders = [[1], [-1]]
I know this makes the neurons respond to positive or negative input.
In other cases the encoders are not set; I looked at the source code, and they default to UniformHypersphere.

In the NengoDL tutorial, it says the encoders can also be optimized, but I can’t find how NengoDL optimizes them.

In addition, A Technical Overview of the Neural Engineering Framework says the decoders can be obtained by least-squares minimization; however, the encoders are randomly determined in that case.
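For concreteness, the least-squares decoder solve I’m referring to looks roughly like this (a toy numpy sketch with made-up tuning curves, not Nengo’s actual solver):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tuning curves: an (n_samples x n_neurons) matrix A of firing rates,
# evaluated at sample points x. All names here are illustrative.
x = np.linspace(-1, 1, 200)[:, None]
gains = rng.uniform(0.5, 2.0, size=10)
encoders = rng.choice([-1.0, 1.0], size=10)
biases = rng.uniform(-1.0, 1.0, size=10)
A = np.maximum(0.0, gains * (x * encoders) + biases)  # rectified-linear rates

f = x  # target: decode the identity function

# Regularized least squares for the decoders D:
#   D = argmin_D ||A D - f||^2 + lam ||D||^2
lam = 0.1
D = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ f)
xhat = A @ D  # decoded estimate of x
```

The key point is that the encoders (and gains/biases) are fixed random draws here; only `D` is optimized.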

I also noticed there is a tutorial, “Improving function approximation by adjusting tuning curves”, but it seems to arrive at better tuning curves by observation.

So my questions are:

  1. What does UniformHypersphere mean? It seems that the encoder values are uniformly distributed.
  2. I’d like to encode a time-varying input as spike trains, and I hope the output spike trains can reflect the input accurately.
    How do I determine the optimal encoders and bias? The input distribution and statistics are already known; is there any way to get the optimal encoders/bias analytically or by some optimization method?
  3. Is it possible to get the encoders by least-squares optimization if I fix all decoder values to 1? Since I need the spike trains for further processing, I don’t care about the decoder values. My objective is to find encoders/bias values that make the spike trains as distinguishable as possible for different inputs.

Exactly: the encoders will be uniformly distributed on the surface of a hypersphere (the generalization of a sphere to higher-dimensional spaces).
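If it helps to see that concretely: sampling uniformly from the surface of a hypersphere amounts to drawing Gaussian vectors and normalizing them. This numpy sketch does the same thing in spirit as `nengo.dists.UniformHypersphere(surface=True)`:

```python
import numpy as np

# Uniform sampling on the surface of a unit hypersphere:
# Gaussian draws are rotationally symmetric, so normalizing each
# vector to unit length gives a uniform distribution on the sphere.
rng = np.random.default_rng(0)
n_neurons, dims = 50, 3
enc = rng.normal(size=(n_neurons, dims))
enc /= np.linalg.norm(enc, axis=1, keepdims=True)  # each row is one encoder
```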

Note that Nengo will determine the gain and bias from the intercepts and maximum firing rates. Alternatively, you can set the gain and bias, which implicitly defines the intercepts and maximum firing rates. I don’t think you can set both the encoders and bias (but I might be wrong).
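For intuition, here is a rough sketch of how that conversion works for a LIF neuron, using the standard NEF equations (an illustration, not Nengo’s exact code):

```python
import numpy as np

tau_rc, tau_ref = 0.02, 0.002       # typical LIF time constants (s)
intercept, max_rate = -0.3, 200.0   # desired tuning properties

def lif_rate(J):
    """Steady-state LIF firing rate for input current J (threshold current = 1)."""
    J = np.atleast_1d(np.asarray(J, dtype=float))
    out = np.zeros_like(J)
    m = J > 1
    out[m] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[m] - 1.0)))
    return out

# Invert the rate equation to find the current J_max that yields max_rate
# (at x = 1), then choose gain and bias so the input current crosses
# threshold exactly at the intercept:
#   gain * intercept + bias = 1   and   gain * 1 + bias = J_max
J_max = 1.0 + 1.0 / np.expm1((1.0 / max_rate - tau_ref) / tau_rc)
gain = (J_max - 1.0) / (1.0 - intercept)
bias = 1.0 - gain * intercept
```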

The optimal encoders depend on the statistics of your input and the output function you are computing (and maybe even other properties of your ensemble?). I don’t think there is a general answer to that question. Also, there might be different ways of defining “optimal” or quantifying how good certain encoders are … But I did some work on the optimal encoders for multiplication.

My first idea in that case is that the encoders should align with the direction of variation of the input (then the neuron output would vary exactly with changes in that direction, whereas with encoders orthogonal to that direction the neuron output would not vary at all). But that assumes there is a major axis of variation in your input … which might not be the case depending on your input statistics.


NengoDL will optimize the encoders using the sim.train function, applying whatever optimization algorithm is specified there. There isn’t really any distinction between encoders, decoders, or connection weights when you are optimizing this way; the same optimization procedure is applied to all of them (just with different values). So you don’t need to do anything special to optimize the encoders; it happens as part of the overall optimization (and it is occurring in the example you linked).

You can set the encoders and biases independently, it’s the biases and intercepts that are linked (so you can only set one or the other).

And yes I would agree with Jan’s suggestion: if you just want the spike trains to be maximally distinguishable, I would try aligning the encoders with the variability in your data (which you might discover with e.g. SVD or PCA).
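To sketch what that alignment could look like in practice (illustrative numpy only, with made-up anisotropic data): take the SVD of your centered input samples and use the leading right-singular vectors, plus their negatives, as unit-length encoders:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up input samples (n_samples x d) with most variance along axis 0:
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.1])

# SVD of the centered data; rows of Vt are the principal directions,
# ordered by explained variance (same directions PCA would give).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Use the principal directions and their negatives as encoders, so
# neurons respond to variation in both senses along each axis:
encoders = np.vstack([Vt, -Vt])
```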

You might also be interested in the Voja/Oja learning rules, which adjust the encoders in an unsupervised manner using online learning (see this example).

Yes, you’re right!