AssociativeMemory & chunking


Hi -

I’ve been trying to use the AssociativeMemory objects to understand how semantic memory is chunked and I have a few questions.

It seems that there is one Ensemble of 50 neurons for each item in the vocabulary, no matter what the dimensionality of the vocabulary item is.
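If it helps to see that layout outside of nengo, here is a minimal NumPy sketch of the idea (purely illustrative: the key list and D = 64 are made up, and real nengo ensembles are spiking populations, not a dot product):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64                                  # dimensionality of each vocabulary vector (arbitrary)

def unit(v):
    return v / np.linalg.norm(v)

keys = ["CAMRY", "RAV4", "ACCORD"]
vocab = {k: unit(rng.standard_normal(D)) for k in keys}

# The memory is essentially the matrix of key vectors: one row per item.
# Each row plays the role of one small ensemble's preferred input, so the
# row count tracks len(keys), not D -- matching one Ensemble per item.
K = np.stack([vocab[k] for k in keys])

sims = K @ vocab["CAMRY"]               # one similarity per vocabulary item
winner = keys[int(np.argmax(sims))]
print(winner)                           # the input's own key wins: CAMRY
```

Each of the 50-neuron ensembles only has to represent a single scalar (the similarity of the input to its key), which is why the count scales with the number of items rather than with D.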

Question #1

I don’t understand why the number of assemblies shouldn’t increase with the dimension.

I’ve made a structure with the feed-forward AM objects:

  1. Toyota_model_company
  2. Honda_model_company
  3. Honda_company_country; Toyota_company_country
  4. Hyundai_model_company, etc.

#1 would be input_keys = ['CAMRY', 'RAV4'], output_keys = ['TOYOTA', 'TOYOTA']
#2 would be input_keys = ['ACCORD', 'CIVIC'], output_keys = ['HONDA', 'HONDA']
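As a toy illustration of what such a feed-forward mapping computes, here is a hedged NumPy sketch of memory #1 (the threshold value and the random vectors are assumptions for the sketch, not nengo's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64

def unit(v):
    return v / np.linalg.norm(v)

names = ["CAMRY", "RAV4", "ACCORD", "CIVIC", "TOYOTA", "HONDA"]
vocab = {n: unit(rng.standard_normal(D)) for n in names}

# Memory #1 from the list above: model -> company.
input_keys = ["CAMRY", "RAV4"]
output_keys = ["TOYOTA", "TOYOTA"]
K_in = np.stack([vocab[k] for k in input_keys])
V_out = np.stack([vocab[k] for k in output_keys])

def assoc(x, threshold=0.3):
    """Thresholded similarity to each input key, weighted onto the output keys."""
    activation = np.maximum(K_in @ x - threshold, 0.0)
    return activation @ V_out

out = assoc(vocab["RAV4"])
best = max(names, key=lambda n: np.dot(vocab[n], unit(out)))
print(best)                             # RAV4 maps to TOYOTA
```

The thresholding is the important part: items below threshold contribute nothing, so the output is (a scaled copy of) the associated company vector rather than a blend of everything.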

I’ve experimented with connecting each AM object to itself and putting noise in the system.

Question #2

What I don’t understand is why, every time I add another AM object to the system, the transforms that allowed information to flow before the extra object was added are no longer sufficient.




Two clarifications on terminology, to make sure we are talking about the same things:

  1. I assume you mean the dimensionality of the SPA vectors, not the number of vocabulary entries in your keys or values.
  2. I assume “assemblies” refers to the N ensembles inside of an ensemble array.

Let me ask you a few questions in return to make sure I understand your confusion before I try to answer your question:

  1. What do you think each assembly represents?
  2. Given this understanding of assemblies, why do you expect their number to grow with the dimensionality of the vocabulary vectors?

I’m not sure I understand. Could you say this in a different way? What is an “AM object”? What “transform” allowed “information flow”, and what do you mean by “sufficient”?


Hi –

I’ve put a detailed response and code in the 2017 brain camp repository. The most general question is:

  1. I want to create a nengo model of the following kind of table:

  model    company   country   continent
  ------   -------   -------   -------------
  RAV4     Toyota    Japan     Asia
  CAMRY    Toyota    Japan     Asia
  ACCORD   Honda     Japan     Asia
  CIVIC    Honda     Japan     Asia
  SONATA   Kia       Korea     Asia
  VOLT     Chevy     US        North America

where query models would allow one to enter CIVIC and get out Honda, Japan, and Asia.
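One way to picture the binding approach is as holographic reduced representations: store a whole row as a sum of role-filler bindings and unbind a role to query it. The sketch below is plain NumPy rather than the spa API, and the role names and D = 512 are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 512                                 # vector dimensionality (an assumption)

def unit(v):
    return v / np.linalg.norm(v)

def cconv(a, b):
    """Circular convolution: the binding operation in HRR/SPA."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inv(a):
    """Approximate inverse, used to unbind a role from a record."""
    return np.concatenate(([a[0]], a[1:][::-1]))

names = ["MODEL", "COMPANY", "COUNTRY", "CONTINENT",
         "CIVIC", "HONDA", "JAPAN", "ASIA"]
vocab = {n: unit(rng.standard_normal(D)) for n in names}

# One table row stored as a single vector: a sum of role (*) filler bindings.
record = (cconv(vocab["MODEL"], vocab["CIVIC"])
          + cconv(vocab["COMPANY"], vocab["HONDA"])
          + cconv(vocab["COUNTRY"], vocab["JAPAN"])
          + cconv(vocab["CONTINENT"], vocab["ASIA"]))

# Query: unbind the COMPANY role, then clean up against the vocabulary.
noisy = cconv(record, inv(vocab["COMPANY"]))
best = max(names, key=lambda n: np.dot(unit(noisy), vocab[n]))
print(best)                             # should recover HONDA
```

Because unbinding is only approximate, the result is noisy, which is exactly why a cleanup (associative) memory is normally placed after the unbind step.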

The code I placed in the GitHub brain camp repository tries to solve this with binding and spa.State objects, but it is only able to retain one pair at a time.
The other attempt uses the AssociativeMemory object. I think there is a lot ‘under the hood’ in this object, so that what one sees in the GUI (including neuron counts and spiking patterns) is not truly representative of what is going on. The code will sort of solve this, but it requires me to keep increasing the transform weight whenever I add another AssociativeMemory object (the doc on GitHub tries to be specific).
The other attempt is with the AssociativeMemory object. I think there is a lot ‘under the hood’ for this object so that what one sees in the gui including neuronal number and spiking pattern ? is not truly representative of what is going on. The code will sort of solve this, but it requires me to keep increasing the transform weight whenever I add another AssociativeMemory object (the doc in github tries to be specific).