I have a question about one of the basic principles of the NEF: representation.
I can't understand the decoding process, which minimizes the MSE between the input signal and the decoded signal.
The MSE is defined in order to find the optimal decoders. Is that right?
Is the MSE defined between the signals at each time t?
For example, suppose the input signal is x(t) = sin(t). Then x(π/6) = 1/2 at t = π/6, and x(π/2) = 1 at t = π/2.
So at t = π/6 the decoding weights are changed by minimizing the MSE at that time, and then at t = π/2 they are changed again by minimizing the MSE at that time?
The decoders are static; they don't change over time (unless you are using an online learning rule). The decoder optimization works by sampling a bunch of points across the representation space (e.g., points from -1 to 1), and then minimizing the MSE across all of those points at once. Then, when you have a time-varying input like sin(t), the values will move around within that -1 to 1 space, but we already covered that whole space during the initial optimization, so the decoders don't need to be updated as the signal changes.
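To make that concrete, here is a minimal NumPy-only sketch of the idea: a set of made-up rectified-linear "tuning curves" (the gains, biases, and encoders below are invented for illustration, not taken from any real model) is evaluated at points sampled across [-1, 1], and a single regularized least-squares solve finds static decoders that minimize the MSE over all of those points at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical rectified-linear tuning curves standing in for real neurons;
# the names (encoders, gains, biases) mirror NEF terminology, but the
# numbers here are made up for illustration.
n_neurons = 30
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # preferred directions in 1-D
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

# Sample the whole representation space once, before any "simulation" runs.
eval_points = np.linspace(-1, 1, 200)
A = np.maximum(0.0, gains * (eval_points[:, None] * encoders) + biases)

# Static decoders d: minimize ||A d - x||^2 over ALL eval points at once,
# with a small regularization term to keep the solve well-conditioned.
reg = (0.1 * A.max()) ** 2 * A.shape[0]
decoders = np.linalg.solve(A.T @ A + reg * np.eye(n_neurons), A.T @ eval_points)

# The same static decoders now reconstruct any value in [-1, 1],
# e.g. every value that sin(t) passes through.
x_hat = A @ decoders
mse = np.mean((x_hat - eval_points) ** 2)
```

Once `decoders` is computed, nothing about it depends on time; decoding sin(t) at any t just means multiplying the current activities by these fixed weights.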
So you mean that if I set the radius parameter to 1 when I define an ensemble A, the decoders of ensemble A are optimized for decoding any value in that range (from -1 to 1) before the simulation starts. Is that right?
And I have additional questions.
model = nengo.Network()
with model:
    A = nengo.Ensemble(50, dimensions=16)
Why is n_eval_points None by default?
For example, for dimensions = 2, the eval_points are sampled inside the unit circle, not just near its surface. Is this correct?
(I know the surface parameter (True/False) of UniformHypersphere specifies whether the points are drawn from the surface of the hypersphere or from its interior.)
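For intuition about that distinction, here is a NumPy-only sketch of how such a distribution can be sampled. The helper function below is hypothetical (it is not Nengo's implementation of nengo.dists.UniformHypersphere, just one standard way to get the same kind of samples): normalized Gaussian vectors give points uniform on the surface, and scaling their radii by U**(1/d) spreads them uniformly through the volume.

```python
import numpy as np

def sample_hypersphere(n, d, surface=False, rng=None):
    """Hypothetical sketch of uniform hypersphere sampling.

    surface=True  -> points uniformly ON the unit hypersphere's surface.
    surface=False -> points uniformly THROUGHOUT its volume, so most
                     points land in the interior, not near the surface.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Normalized Gaussian vectors are uniformly distributed over the surface.
    x = rng.standard_normal((n, d))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    if not surface:
        # Radii drawn as U**(1/d) make the points uniform in volume.
        x *= rng.uniform(size=(n, 1)) ** (1.0 / d)
    return x

pts = sample_hypersphere(1000, 2)            # scattered inside the unit circle
shell = sample_hypersphere(1000, 2, surface=True)  # all on the circle itself
```

With surface=False (the behavior relevant to eval_points), the mean radius in 2-D comes out around 2/3 rather than 1, which is exactly the "inside, not near the surface" effect you describe.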