Unexpected value for scaled encoders

Hi all,

In order to recreate a previously trained ensemble, one has to use the encoder values, and not the scaled_encoders, from an ensemble at the end of a training stage to initialize an identically behaving ensemble (explained in NengoDL simulator provides different results in training network - #4 by xchoo). The NengoDL simulator provides a function to get these parameters, aptly named get_nengo_params, but the Nengo core simulator does not. I am using the nengo_dl.simulator.get_nengo_params function as a template for a function that allows me to later create an Ensemble with the same scaled_encoders as those of a previously trained ensemble.

Instead of starting with a trained network, I just start with a model that has a single ensemble.

I have the following code:

import numpy as np
import nengo
from nengo.dists import Uniform

def calculate_params(simulator, ens, s_encoders):
    # Calculate the "unscaled" encoders,
    # based on the `get_nengo_params` func from the NengoDL simulator.

    gain = simulator.model.params[ens].gain
    radius = ens.radius

    encoders = s_encoders * radius / gain[:, None]

    return encoders, gain, radius


n_neurons = 2
Dim = 1
seed = 1

model = nengo.Network()

intercepts = [0,0]
encoders = np.array([[.5],[.3]])

with model:

    A = nengo.Ensemble(
        n_neurons,
        dimensions=Dim,
        intercepts=intercepts,
        encoders=encoders,
        seed=seed
    )

    probe_encoders = nengo.Probe(A, 'scaled_encoders')

sim = nengo.Simulator(model, progress_bar=False)

sim.run(0.001)

scaled_encoders = sim.data[probe_encoders][0]

encoders_calculated, gain, radius = calculate_params(sim, A, scaled_encoders)

print(f"initialized encoders: \n{encoders}")

print(f"radius: {radius}")

print(f"gain: {gain}")

print(f"probed scaled_encoders: \n{scaled_encoders}")

print(f"calculated encoders: \n{encoders_calculated}")

The output is:

initialized encoders:
[[0.5]
 [0.3]]
radius: 1.0
gain: [29.76470771 36.98730496]
probed scaled_encoders:
[[29.76470771]
 [36.98730496]]
calculated encoders:
[[1.]
 [1.]]

As you see, the initialized encoders and calculated encoders are not the same. However, the values for the calculated encoders are correct given the value for the scaled encoders of the ensemble A, and the gain values.
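To make that concrete, here is a quick sanity check of the arithmetic, plugging in the gain and scaled-encoder values from the output above:

```python
import numpy as np

# Values taken from the printed output above
gain = np.array([29.76470771, 36.98730496])
scaled_encoders = np.array([[29.76470771], [36.98730496]])
radius = 1.0

# The inverse calculation from calculate_params
print(scaled_encoders * radius / gain[:, None])
# [[1.]
#  [1.]]
```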

The problem, then, is that the scaled_encoders values do not look correct to me. Given how scaled_encoders are calculated during the build process, I would expect them to be

[[0.5 * 29.76470771]
 [0.3 * 36.98730496]]

As the scaled encoders are now, it looks like the Nengo simulator acts as if the (unscaled) encoders were set to [[1.], [1.]], even though I explicitly set them to [[0.5], [0.3]].

So my question is: Am I misunderstanding how scaled encoders are calculated? What is going on?

Apparently, 1D neurons do not really have meaningful encoders. The method sort of works for higher-dimensional ensembles (which I am using in my actual project).

Setting Dim = 2 and encoders = np.array([[.5, 1], [-1, .3]]):

The output is:

initialized encoders:
[[ 0.5  1. ]
 [-1.   0.3]]
radius: 1.0
gain: [ 7.48478721 12.16548901]
probed scaled_encoders:
[[  3.3472986    6.6945972 ]
 [-11.65242515   3.49572754]]
calculated encoders:
[[ 0.4472136   0.89442719]
 [-0.95782629  0.28734789]]

Now the initialized and calculated encoders are rather similar, but not exactly the same. Why not?

Comparing the scaled encoders of A with those of another ensemble created using the calculated encoders, the actual scaled_encoders are the same (sometimes different at the 8th or so decimal, but that is the same in my book), which is what matters after all.
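For reference, that round-trip between scaled and unscaled encoders can be checked with plain NumPy, without building a network at all. The gain values below are made up for illustration; only the relation between the quantities matters:

```python
import numpy as np

# Hypothetical per-neuron gains, standing in for a built ensemble's params
gain = np.array([7.5, 12.2])
radius = 1.0
encoders = np.array([[0.5, 1.0], [-1.0, 0.3]])

# Nengo normalizes each encoder row to unit length before scaling
unit_encoders = encoders / np.linalg.norm(encoders, axis=1, keepdims=True)

# Build step: scaled_encoders = unit_encoders * gain / radius
scaled_encoders = unit_encoders * (gain / radius)[:, None]

# Inverse step, as in calculate_params: recover the (normalized) encoders
recovered = scaled_encoders * radius / gain[:, None]

print(np.allclose(recovered, unit_encoders))  # True
```

So the inverse calculation recovers the normalized encoders exactly, which is why the recreated ensemble ends up with matching scaled_encoders.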

Hi @ChielWijs,

If you look at the Nengo code, you'll see that by default, Nengo normalizes all encoder values provided to an ensemble when it is built. You can turn off this behaviour by specifying normalize_encoders=False when creating the ensemble.

As an example, if I ran your code with normalize_encoders=False, this is the output I get:

initialized encoders:
[[0.5]
 [0.3]]
radius: 1.0
gain: [16.9719751   6.58907043]
probed scaled_encoders:
[[8.48598755]
 [1.97672113]]
calculated encoders:
[[0.5]
 [0.3]]

The reason Nengo does this is to avoid behaviour that is unexpected (from the user’s perspective) when using non-normalized encoders. If you look at the equation Nengo uses to calculate the input current to a neuron:

J(\mathbf{x}) = \alpha (\mathbf{e} \cdot \mathbf{x}) + J^{bias}

You’ll see that there is a dot product between the input vector \mathbf{x} and the neuron’s encoder \mathbf{e}. If the encoder is not of unit magnitude (i.e., the L2 norm is not 1), this dot product will throw in a scaling factor that may be unexpected by the user. Thus, Nengo normalizes all encoder values before building the ensemble. If you look at the outputs of your 2D example, you’ll notice that this is what is happening too. Normalizing [0.5, 1] gives you [0.447, 0.894], and normalizing [-1, 0.3] gives you [-0.958, 0.287].
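You can verify this normalization with a couple of lines of NumPy, using the encoders from the 2D example:

```python
import numpy as np

# Encoders from the 2D example in the post
encoders = np.array([[0.5, 1.0], [-1.0, 0.3]])

# Normalize each row to unit L2 norm, as Nengo does by default
norms = np.linalg.norm(encoders, axis=1, keepdims=True)
normalized = encoders / norms

print(normalized)
# [[ 0.4472136   0.89442719]
#  [-0.95782629  0.28734789]]
```

These are exactly the "calculated encoders" printed in the 2D output above.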

As to your comment:

1D neurons actually do have encoders! In 3D, normalized encoders are located on the surface of a unit sphere. In 2D, normalized encoders are located on the circumference of a unit circle. And in 1D, normalized encoders are located on the ends of a unit line, with the two possible encoder values being 1, and -1. :smiley:
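This also explains the 1D output from earlier in the thread: normalizing the [0.5] and [0.3] encoders snaps each of them to 1, which is exactly why the calculated encoders came out as [[1.], [1.]]:

```python
import numpy as np

# 1D encoders from the original example
encoders_1d = np.array([[0.5], [0.3]])

# Row-wise normalization collapses every positive 1D encoder to 1
normalized_1d = encoders_1d / np.linalg.norm(encoders_1d, axis=1, keepdims=True)

print(normalized_1d)
# [[1.]
#  [1.]]
```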


Thanks a lot for the clear explanation!