Can you pass multi-dimensional data into a 2D Ensemble?

Hello,

https://www.nengo.ai/nengo/examples/basic/2d-representation.html

https://www.nengo.ai/nengo-fpga/examples/notebooks/04-oscillator.html

I am using these two Nengo examples as a guide to create a 2D Ensemble that represents two different inputs, each with a specified number of dimensions (I am testing with 16-dimensional inputs). There is an FpgaPesEnsembleNetwork with a feedback connection that serves as a memory aggregator; a stimulus feeds it Semantic Pointers as multi-dimensional vectors. We are trying to connect both the input node that precedes the memory aggregator and the memory aggregator itself to a 2D Ensemble, but I have run into two problems so far.

  1. In the 2D representation example linked above, the inputs are one-dimensional. When I try to feed 16-dimensional data into that example, I get errors unless the Ensemble's dimensionality is also 16. How can I create an Ensemble or Node that takes the multi-dimensional data from the stimulus while still representing the separate Connections?

  2. Also, the FpgaPesEnsembleNetwork does not support indexing, which the example relies on. Can I use an FpgaPesEnsembleNetwork as a single object with two separate inputs of multiple dimensions each?

Here is a picture of the Nengo GUI representation of the network to help explain what we are trying to accomplish. The regular ensemble labeled anom_ens is where I want to create the two-dimensional Ensemble to represent the two incoming connections, and the inputs from the encoder are 16-dimensional vectors.
[image: nengoforum1, the Nengo GUI view of the network]

Let me know what more information you need in order to advise me on how to create this type of model.

Thanks
Daniel

Hi Daniel and thanks for looking into Nengo.

The way Nengo works is that it expects a flattened input. This means you can't have input[0] be one 16D vector and input[1] be another 16D vector. The typical way around this is to use two separate ensembles, each with a 16D input and a 1D output.
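For a regular (non-FPGA) network, a minimal sketch of that two-ensemble layout might look like the following (the node names, neuron counts, and zero-valued inputs here are placeholders for your own stimulus and aggregator outputs):

import numpy as np
import nengo

with nengo.Network() as model:
    # two separate 16D sources, standing in for the encoder output
    # and the memory aggregator output
    stim_a = nengo.Node(np.zeros(16))
    stim_b = nengo.Node(np.zeros(16))

    # one 16D ensemble per input instead of a single "2D" ensemble
    ens_a = nengo.Ensemble(n_neurons=200, dimensions=16)
    ens_b = nengo.Ensemble(n_neurons=200, dimensions=16)

    nengo.Connection(stim_a, ens_a)
    nengo.Connection(stim_b, ens_b)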

Now for the NengoFPGA ensembles you can't do this, so one workaround would be to stack your inputs into a 32D vector and adjust your encoders so that half of your neurons are sensitive to the first 16 dimensions and the other half to the second 16 dimensions. The simple way would be to zero out the encoders for the first and last 16 dimensions for the respective neuron groups in your ensemble, as sketched below. However, you may run into some representational issues depending on how you sample your encoders, since you want to represent 16D inputs, not 32D ones.
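To make the simple zeroing approach concrete, here is a rough sketch using Nengo's built-in UniformHypersphere distribution (n_neurons is a hypothetical neuron count):

import numpy as np
from nengo.dists import UniformHypersphere

n_neurons = 2000  # hypothetical total neuron count for the combined ensemble
half = n_neurons // 2

# sample 32D unit-length encoders, then silence half the dimensions per group
encoders = UniformHypersphere(surface=True).sample(n_neurons, 32)
encoders[:half, 16:] = 0  # first half only responds to dimensions 0-15
encoders[half:, :16] = 0  # second half only responds to dimensions 16-31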

One thing that might fix this is to sample your encoders from a ScatteredHypersphere for a 16D input for each half of your ensemble and stack them. You'll still need to append/prepend zeros so that your final encoder array has shape (n_neurons, 32). The difference from sampling a 32D vector and zeroing out dimensions is that you're sampling directly from a 16D hypersphere, so you'll get a better representation of your 16D inputs; the added zeros then make each neuron group ignore the other input's dimensions.

As a quick reference, you would do something like this:

import numpy as np
from nengolib.stats import ScatteredHypersphere

half = n_neurons // 2  # n_neurons as defined in the sketch above

# create encoders for the first 16D input
encoders_1 = ScatteredHypersphere(surface=True).sample(half, 16)
# append 16 zeros so these neurons have no activity on the last 16 dimensions
encoders_1 = np.hstack((encoders_1, np.zeros((half, 16))))

# create another set of encoders for the second 16D input
encoders_2 = ScatteredHypersphere(surface=True).sample(half, 16)
# prepend 16 zeros so these neurons have no activity on the first 16 dimensions
encoders_2 = np.hstack((np.zeros((half, 16)), encoders_2))

# stack the two halves to get the (n_neurons, 32) encoders for the full 32D input
encoders = np.vstack((encoders_1, encoders_2))
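You can then pass the stacked encoders to a 32D ensemble and connect each 16D input to its own slice. The sketch below uses a regular nengo.Ensemble with placeholder nodes; applying the same encoders to the NengoFPGA ensemble may work differently, so check the NengoFPGA docs for how its parameters are set:

import nengo

with nengo.Network() as model:
    stim_a = nengo.Node(np.zeros(16))  # placeholder first 16D source
    stim_b = nengo.Node(np.zeros(16))  # placeholder second 16D source

    combined = nengo.Ensemble(
        n_neurons=n_neurons, dimensions=32, encoders=encoders)

    # stack the two 16D inputs into the single 32D ensemble
    nengo.Connection(stim_a, combined[:16])
    nengo.Connection(stim_b, combined[16:])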