I’ve been trying to use the AssociativeMemory objects to understand how semantic memory is chunked, and I have a few questions.
It seems that there is one Ensemble of 50 neurons for each item in the vocabulary, no matter what the dimensionality of the vocabulary items is.
I don’t understand why the number of ensembles shouldn’t increase with the dimension.
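To check my mental model of why the per-key ensemble size can be independent of the dimension: each ensemble only has to represent the scalar similarity between the input and its key, and the dimensionality gets collapsed by the input weights. Here's a NumPy sketch of that idea (the 64-D size and the key names are just made-up examples, not anything from the library):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # arbitrary vocabulary dimensionality for the sketch

def make_vec(d):
    # Unit-length random vector, like a SPA vocabulary pointer.
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

vocab = {key: make_vec(D) for key in ["CAMRY", "RAV4", "ACCORD"]}

# A noisy input that is mostly CAMRY.
x = vocab["CAMRY"] + 0.2 * make_vec(D)

# Each per-key ensemble represents ONE scalar: the dot product of the
# D-dimensional input with that key.  D only appears in the weights that
# compute this dot product, not in the size of the ensemble.
similarities = {k: float(np.dot(v, x)) for k, v in vocab.items()}
best = max(similarities, key=similarities.get)
print(best, similarities)
```

If that picture is right, growing D changes the input weight matrices but not how many scalars need representing, which would explain the fixed 50 neurons per key.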
I’ve made a structure with the feed-forward AM objects:
- Honda_company_country; Toyota_company_country
- Hyundai_model_company, etc.
#1 would be
input_keys = ['CAMRY', 'RAV4'],
output_keys = ['TOYOTA', 'TOYOTA']
#2 would be
input_keys = ['ACCORD', 'CIVIC'],
output_keys = ['HONDA', 'HONDA']
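At the vector level, my understanding is that each of these feed-forward AMs behaves roughly like a sum of outer products of its output/input key pairs (the actual neural implementation thresholds each similarity first, so this is only a sketch, with made-up 64-D vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64

def make_vec(d):
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

keys = ["CAMRY", "RAV4", "ACCORD", "CIVIC", "TOYOTA", "HONDA"]
vocab = {k: make_vec(D) for k in keys}

# AM #1: Toyota models -> TOYOTA; AM #2: Honda models -> HONDA.
pairs_1 = [("CAMRY", "TOYOTA"), ("RAV4", "TOYOTA")]
pairs_2 = [("ACCORD", "HONDA"), ("CIVIC", "HONDA")]

def transform(pairs):
    # Sum of outer products: each input key is mapped onto its output key.
    return sum(np.outer(vocab[out], vocab[inp]) for inp, out in pairs)

T1 = transform(pairs_1)
out = T1 @ vocab["CAMRY"]  # feed CAMRY through AM #1
sims = {k: float(np.dot(vocab[k], out)) for k in ["TOYOTA", "HONDA"]}
print(sims)
```

Feeding CAMRY through the first mapping should produce something close to TOYOTA and nearly orthogonal to HONDA, which matches what I see from the feed-forward AMs when they work.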
I’ve experimented with connecting each AM object back to itself and injecting noise into the system.
What I don’t understand is why, every time I add another AM object to the system, the transforms that allowed information to flow before the extra object was added are no longer sufficient.
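My current guess, sketched with plain arithmetic: if each AM stage passes its key on at some attenuated magnitude, the attenuation compounds multiplicatively across stages, so a transform that kept the similarity above threshold in a shorter chain falls below it once one more stage is added. The 0.7 attenuation factor is an assumption for illustration; 0.3 is, as far as I know, the default AM threshold in Nengo:

```python
import numpy as np

rng = np.random.default_rng(2)
D = 64
threshold = 0.3  # assumed AM firing threshold
atten = 0.7      # assumed per-stage attenuation (illustrative)

key = rng.standard_normal(D)
key /= np.linalg.norm(key)

# Similarity of the represented vector with its key after each stage.
s1 = float(np.dot(key, atten * key))  # one stage:    0.7
s2 = atten * s1                        # two stages:   0.49
s3 = atten * s2                        # three stages: 0.343
s4 = atten * s3                        # four stages:  ~0.24, below threshold
print(s1, s2, s3, s4)
```

If something like this is what’s happening, it would explain why the old transforms need rescaling each time an object is added rather than being wrong outright. Is rescaling (or thresholded/WTA output, which renormalizes the cleaned-up vector) the intended fix?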