The only difference you should encounter between the two scenarios is the speed at which your simulation runs. However, for the number of neurons you are using (9), there shouldn't be any noticeable difference between them.
However, if you have a lot of neurons, then approach `a` would run faster than approach `b`. This is because with `a`, the connection weights can be applied as a single matrix multiplication, which NumPy optimizes to run across multiple CPU cores. With `b`, instead of one matrix multiplication you get X independent multiplication operations, which are not optimized to run across multiple CPU cores (so it ends up being slower).
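Here is a minimal NumPy sketch of that difference. The array names and sizes are illustrative assumptions, not taken from your model; the point is only that one vectorized matrix multiplication and many independent per-row multiplications compute the same result, but the single call is dispatched to optimized (and potentially multi-threaded) BLAS routines while the loop is not:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 1000  # hypothetical size; the effect only shows up at scale
weights = rng.standard_normal((n_neurons, n_neurons))  # hypothetical weight matrix
x = rng.standard_normal(n_neurons)                     # input vector

# Approach a: one vectorized matrix multiplication
# (handled by optimized BLAS, which can use multiple CPU cores)
out_a = weights @ x

# Approach b: independent multiply-accumulate operations, one per row,
# driven by a Python-level loop with no multi-core optimization
out_b = np.array([np.dot(weights[i], x) for i in range(n_neurons)])

assert np.allclose(out_a, out_b)  # identical results, very different speed
```

For large `n_neurons`, timing the two (e.g., with `timeit`) shows the loop version falling well behind the single matrix multiply.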
That is correct! When you connect directly to the ensemble (i.e., `nengo.Connection(..., ens)`), you are using the ensemble in NEF mode. In NEF mode, the encoders and decoders of the ensemble are applied (i.e., multiplied) to the input and output signals of the ensemble. Thus, the flow of information through an ensemble goes something like this (simplified):
signal input → × connection weight → synapse applied → × encoder → neuron non-linearity → × decoder → signal output
But, if you connect to the `.neurons` attribute of the ensemble, it bypasses the multiplication by the encoders and the subsequent multiplication by the decoders. So the signal flow looks something like this:
signal input → × connection weight → synapse applied → neuron non-linearity → signal output
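The two flows can be sketched in plain NumPy. This is only a conceptual illustration under stated assumptions: the encoder/decoder values, the rectified-linear stand-in for the neuron model, and the weight are all made up, and the synapse filtering step is omitted for brevity; it is not how Nengo implements this internally.

```python
import numpy as np

rng = np.random.default_rng(42)
n_neurons, dims = 9, 1
encoders = rng.standard_normal((n_neurons, dims))  # hypothetical encoders
decoders = rng.standard_normal((dims, n_neurons))  # hypothetical decoders

def neuron_nonlinearity(j):
    # Stand-in for the neuron model (here, a rectified-linear response)
    return np.maximum(j, 0.0)

x = np.array([0.5])    # 1-D input signal
w = np.array([[2.0]])  # connection weight (transform)

# NEF mode, i.e., nengo.Connection(..., ens):
# weighted input is multiplied by the encoders, passed through the
# neuron non-linearity, then multiplied by the decoders
nef_out = decoders @ neuron_nonlinearity(encoders @ (w @ x))

# Direct neuron connection, i.e., nengo.Connection(..., ens.neurons):
# the weighted input drives each neuron directly; no encoder or
# decoder multiplications happen
neuron_input = np.full(n_neurons, (w @ x).item())
direct_out = neuron_nonlinearity(neuron_input)

# NEF mode returns a decoded (dims-dimensional) signal; the direct
# connection returns one value per neuron
print(nef_out.shape, direct_out.shape)
```

Note the difference in what comes out: the NEF-mode path gives you a decoded value in the ensemble's represented space, while the `.neurons` path gives you raw per-neuron activities.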