AssociativeMemory & chunking

Hi -

I’ve been trying to use the AssociativeMemory objects to understand how semantic memory is chunked and I have a few questions.

It seems that there is one Ensemble of 50 neurons for each item in the vocabulary, no matter what the dimensionality of the vocabulary items is.

Question #1

I don’t understand why the number of assemblies shouldn’t increase with the dimension.

I’ve made a structure with the feed-forward AM objects:

  1. Toyota_model_company
  2. Honda_model_company
  3. Honda_company_country; Toyota_company_country
  4. Hyundai_model_company, etc.

#1 would be input_keys = ['CAMRY', 'RAV4'], output_keys = ['TOYOTA', 'TOYOTA']
#2 would be input_keys = ['ACCORD','CIVIC'], output_keys = ['HONDA', 'HONDA']
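
For reference, this is roughly how I’m constructing each one (just a sketch; the dimensionality and the vocabulary contents are placeholders for my actual setup):

import nengo.spa as spa

D = 32  # placeholder dimensionality
vocab_model = spa.Vocabulary(D)    # CAMRY, RAV4, ACCORD, CIVIC, ...
vocab_company = spa.Vocabulary(D)  # TOYOTA, HONDA, ...

with spa.SPA() as model:
    # e.g. #1, the model -> company mapping as a feed-forward associative memory
    model.toyota_model_company = spa.AssociativeMemory(
        vocab_model, vocab_company,
        input_keys=['CAMRY', 'RAV4'],
        output_keys=['TOYOTA', 'TOYOTA'])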

I’ve experimented with connecting each AM object to itself and putting noise in the system.

Question #2

What I don’t understand is why, every time I add another AM object to the system, the transforms that allowed information to flow before the extra object was added are no longer sufficient.

Thanks

Assuming:

  1. You mean the dimensionality of the SPA vectors and not the number of vocabulary entries in your keys or values
  2. “assemblies” refers to the N ensembles inside an ensemble array

Let me ask you a few questions in return to make sure I understand your confusion before I try to answer your question:

  1. What do you think each assembly represents?
  2. Given this understanding of assemblies, why do you expect their number to grow with the dimensionality?

I’m not sure I understand. Could you say this in a different way? What is an “AM object”? What “transform” allowed “information flow”, and what do you mean by “sufficient”?

Hi –

I’ve put a detailed response and code in the 2017 brain camp repository. The most generic question is:

I want to create a nengo model of the following kind of table:

Model   Company  Country  Continent
RAV4    Toyota   Japan    Asia
CAMRY   Toyota   Japan    Asia
ACCORD  Honda    Japan    Asia
SONATA  Kia      Korea    Asia
VOLT    Chevy    US       North America

where a query would allow one to enter CIVIC and get out Honda, Japan, and Asia.

The code I placed in the GitHub brain camp repository tries to solve this with binding and spa.State objects, but it is only able to retain one pair at a time.

The other attempt uses the AssociativeMemory object. I think there is a lot ‘under the hood’ for this object, so what one sees in the GUI (including the number of neurons and the spiking pattern) is not truly representative of what is going on. The code will sort of solve this, but it requires me to keep increasing the transform weight whenever I add another AssociativeMemory object (the doc on GitHub tries to be specific).

Thanks

Don’t have the time to look at this right now, but if anyone else wants to take a look at it, the complete code and explanation are in the summerschool2017 repo here.

I intend to reply to this, but also don’t have the time at the moment as this seems to require a more elaborate answer.

Is there a reason you put the code in the Summer School repo? I’ve reformatted it into a Jupyter Notebook. Is it okay if I publish that notebook publicly? Are you trying to keep this code private?

Also, I’m a bit confused by the inputs and outputs of this model. I understand you’re saving a table of data relations in neurons. How do you want to query this information? Will you only be querying by car model, or do you want other kinds of queries as well?

Publication-wise, you may be interested in Eric Crawford’s work with “Biologically Plausible, Human-Scale Knowledge Representation”.

I only put the code in the Summer School repo to keep the forum uncluttered. Please feel free to put it wherever it might get a good answer.

Thanks

Howard

Hi - I tried to address this using neuronal assemblies, borrowing heavily from the AssociativeMemory learning example.

  1. I want the model to learn associations between car model and car company, and then car company and country of production.
  2. Neuronal assemblies learn these mappings.
  3. The assemblies learn the associations between 4 models and 2 companies, and then between 3 companies and 2 countries.
  4. Learning for these associations occurs in separate stages (the first 2 seconds, and then 6.5 to 8 s)
  5. A gate blocks the flow of information between the 1st and 2nd association until learning is complete. The gate is controlled by BG-thal.

This model accomplishes what I wanted the spa.AssociativeMemory object to do. My goal is to understand how semantic memory is chunked. The code written for the GUI is below.

import numpy as np
import nengo
import nengo.spa as spa
D = 32

num_items = 5

rng = np.random.RandomState(seed=7)


intercept = 0.08



# Model constants
n_neurons = 2000
dt = 0.001
period = 0.4
T = period*num_items*2


# Model network
model = spa.SPA()
with model:
    
    model.vocab = spa.Vocabulary(D)
    RAV4 = model.vocab.parse('RAV4')
    CAMRY = model.vocab.parse('CAMRY')
    COROLLA = model.vocab.parse('COROLLA')
    ACCORD = model.vocab.parse('ACCORD')
    TOYOTA = model.vocab.parse('TOYOTA')
    HONDA = model.vocab.parse('HONDA')
    JAPAN = model.vocab.parse('JAPAN')
    KOREA = model.vocab.parse('KOREA')
    CLOSED  = model.vocab.parse('CLOSED')
    OPEN = model.vocab.parse('OPEN')
    
    def car_model_input(t):
        if 0 < t < 0.6:
            return 'RAV4'
        elif 0.6 < t < 1.2:
            return 'CAMRY'
        elif 1.2 < t < 2.0:
            
            return 'ACCORD'  
        elif 2 < t < 3.0:
            return 'RAV4'
        elif 4 < t < 5:
            return 'ACCORD'
        elif 13 < t < 14:
            return 'CAMRY'
        else:
            return '0'
            
    def company_input(t):
        if 0 < t < 1.2:
            return 'TOYOTA'
        elif 1.2 < t < 2.0:
            return 'HONDA'
        else:
            return '0'
    model.store_car_model = spa.State(D)
    model.input_car_model = spa.Input(store_car_model = car_model_input)
    
    model.store_company = spa.State(D)
    model.input_company = spa.Input(store_company = company_input)
    
    
    
    learning = nengo.Node(output=lambda t: -int(t>=T/2))
    recall_model_to_company = nengo.Node(size_in=D)
    
# Create the memory
    memory_model_to_company = nengo.Ensemble(n_neurons, D, intercepts=[intercept]*n_neurons)
        
# Learn the encoders/keys
    voja = nengo.Voja(post_tau=None, learning_rate=5e-2)
    conn_in = nengo.Connection(model.store_car_model.output, memory_model_to_company, synapse=None,
        learning_rule_type=voja)
    nengo.Connection(learning, conn_in.learning_rule, synapse=None)


# Learn the decoders/values, initialized to a null function
    conn_out = nengo.Connection(memory_model_to_company, recall_model_to_company, learning_rule_type=nengo.PES(1e-3),
        function=lambda x: np.zeros(D))

# Create the error population
    error = nengo.Ensemble(n_neurons, D)
    nengo.Connection(learning, error.neurons, transform=[[10.0]]*n_neurons,
        synapse=None)

   
# Calculate the error and use it to drive the PES rule
    nengo.Connection(model.store_company.output, error, transform=-1, synapse=None)
    nengo.Connection(recall_model_to_company, error, synapse=None)
    nengo.Connection(error, conn_out.learning_rule)
    
# make assembly to read recall - why isn't it reading?
    
    model.read_recall = spa.State(D)
    nengo.Connection(recall_model_to_company, model.read_recall.input, transform = 10)

    intercept2 = 0.08

    def country_input(t): 
        if 0 < t < 6.5:
            return '0'
        elif 6.5 < t < 7.5:
            return 'JAPAN'
        elif 7.5 < t < 8.0:
            return 'KOREA'
        
        else:
            return '0'
            
    def company_input2(t):
        if 0 < t < 6.5:
            return '0'
        elif 6.5 < t < 7.0:
            return 'TOYOTA'
        elif 7.0 < t < 7.5:
            return 'HONDA'
        elif 7.5 < t < 8.0:
            return 'KIA'
        elif 9.0 < t < 10:
            return 'TOYOTA'
        elif 11 < t < 12:
            return 'KIA'
        else:    
            return '0'
        
            
    model.store_country = spa.State(D, label = 'store_country_state_label')
    model.input_country = spa.Input(store_country = country_input)
    
    model.store_company2 = spa.State(D)
    model.input_company2 = spa.Input(store_company2 = company_input2, label = 'input_company2')
    
    
    
    
    def learn(t):
        if 0 < t < 6.5:
            return -1
        elif 6.5 < t < 8.0:
            return 0
        else:
            return -1
            
    learning2 = nengo.Node(learn, label = 'learning2')
    recall2 = nengo.Node(size_in=D, size_out = D, label = 'recall2')
    
# Create the memory
    memory2_company_to_country = nengo.Ensemble(n_neurons, D, intercepts=[intercept2]*n_neurons)
        
# Learn the encoders/keys
    voja = nengo.Voja(post_tau=None, learning_rate=5e-2)
    conn_in2 = nengo.Connection(model.store_company2.output, memory2_company_to_country, synapse=None,
        learning_rule_type=voja)
    nengo.Connection(learning2, conn_in2.learning_rule, synapse=None)


# Learn the decoders/values, initialized to a null function
    conn_out2 = nengo.Connection(memory2_company_to_country, recall2, learning_rule_type=nengo.PES(1e-3),
        function=lambda x: np.zeros(D))

# Create the error population
    error2 = nengo.Ensemble(n_neurons, D)
    nengo.Connection(learning2, error2.neurons, transform=[[10.0]]*n_neurons,
        synapse=None)

   
# Calculate the error and use it to drive the PES rule
    nengo.Connection(model.store_country.output, error2, transform=-1, synapse=None)
    nengo.Connection(recall2, error2, synapse=None)
    nengo.Connection(error2, conn_out2.learning_rule)
    
# make assembly to read recall
    
    model.read_recall2 = spa.State(D)
    
    
    nengo.Connection(recall2, model.read_recall2.input, transform = 10)
    
    ############################
    model.gate = spa.State(D, feedback = 0)
    
    def control_gate(t):
        if 0 < t < 8:
            return 'CLOSED'
        elif 8 < t < 15:  
            return 'OPEN'
        else:
            return '0'
            
    model.gate_instructions = spa.State(D, feedback = 0)
    
    nengo.Connection(recall_model_to_company, model.gate.input, synapse = 0.01)
    nengo.Connection(model.gate.output, recall_model_to_company, transform = -1)
    nengo.Connection(model.gate.output, memory2_company_to_country, synapse = 0.01)
    
    model.control_gate_action = spa.Actions (
        'dot(gate_instructions, CLOSED)-->',
        'dot(gate_instructions, OPEN)--> ')
    
    model.input_gate_instructions = spa.Input(gate_instructions = control_gate)
            
    model.bg = spa.BasalGanglia(model.control_gate_action)
    model.thal = spa.Thalamus(model.bg)
    
    #ZERO
    for e in model.gate.all_ensembles:
        nengo.Connection(model.thal.output[0], e.neurons,
                         transform= -2.5* np.ones((e.n_neurons, 1)))
    #
    #ONE                
    #for e in model.gate.all_ensembles:
     #   nengo.Connection(model.thal.output[0], e.neurons,
      #                   transform= -2.5* np.ones((e.n_neurons, 1)))

Thanks

Howard

I finally have the time to formulate an answer.

To answer the questions we have to consider how the associative memory works, but it might be even better to start with what we want to achieve in basic linear algebra. To implement associations, we want to map each Semantic Pointer in a list (or “vocabulary”) A to some Semantic Pointer in a list B. In other words, we want to project some input vector $a$ to a corresponding output vector $b$. While one could think of more complicated methods, a linear transform, that is a matrix multiplication $b = Ma$, suffices because the Semantic Pointers in one vocabulary are almost orthogonal. How do we find $M$ for a desired set of associations?

To map the actual vectors, we collect the vectors of all the Semantic Pointers into matrices $V_A$ and $V_B$, where each row corresponds to one Semantic Pointer. Given some input vector $a$, $V_A a$ gives us a vector of the dot products (that is, the similarities) of the input vector with all the Semantic Pointers in the vocabulary. This essentially encodes the index of the input vector within the vocabulary. Multiplying by the transpose of $V_B$ then projects this index onto the desired output vector, i.e. the associated Semantic Pointer. Thus, $M = V_B^T V_A$.
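
As a quick sanity check of that equation in plain NumPy (just a sketch, with random unit-length vectors standing in for the Semantic Pointers):

import numpy as np

rng = np.random.RandomState(0)
D, n_items = 64, 5

# rows are the (almost orthogonal) Semantic Pointers of each vocabulary
V_A = rng.randn(n_items, D) / np.sqrt(D)  # input vocabulary (e.g. car models)
V_B = rng.randn(n_items, D) / np.sqrt(D)  # output vocabulary (e.g. companies)

M = V_B.T.dot(V_A)   # the association transform

a = V_A[2]           # present the third input pointer
b = M.dot(a)         # approximately the third output pointer
print(V_B.dot(b))    # similarities with all output pointers; index 2 dominates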

The associative memory is essentially an implementation of this equation. The $V_A$ transform is applied on the input connection weights. Each of the resulting dimensions is fed into a separate ensemble that thresholds the value (default threshold 0.3); this is intended to clean up noise. The thresholded values are then projected to the associated output vectors with a transform of $V_B^T$ on the output connection weights.
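
The same pipeline with the thresholding step in between, again as an idealized NumPy stand-in rather than the actual neural implementation (the noise level here is arbitrary):

import numpy as np

rng = np.random.RandomState(1)
D, n_items = 64, 5
V_A = rng.randn(n_items, D) / np.sqrt(D)  # input vocabulary
V_B = rng.randn(n_items, D) / np.sqrt(D)  # output vocabulary

noisy_a = V_A[2] + 0.1 * rng.randn(D)                      # corrupted input pointer
similarities = V_A.dot(noisy_a)                            # input transform V_A
cleaned = np.where(similarities > 0.3, similarities, 0.0)  # threshold (default 0.3)
b = V_B.T.dot(cleaned)                                     # output transform V_B^T
print(np.argmax(V_B.dot(b)))                               # -> 2, the associated output pointer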

Question 1: The shape of the $V$ matrices is the number of Semantic Pointers times their dimensionality. Because $V_A$ is applied on the input, the processing within the associative memory network is independent of the Semantic Pointer dimensionality and depends only on the number of Semantic Pointers; that is why you see one ensemble per vocabulary item rather than per dimension.

Question 2: Looking at parts of your code in the summer school repository, it seems that you are using the wrong vocabulary on the last associative memory. You are using model.vocab_Asian for the input, but the output of the previous associative memory uses model.Toyota_new_vocab_general, so your Semantic Pointers do not match up.

Here is a minimal working example that might be helpful:

import nengo
import nengo.spa as spa


D = 64
with spa.SPA() as model:
    vocab_model = spa.Vocabulary(D)
    vocab_company = spa.Vocabulary(D)
    vocab_country = spa.Vocabulary(D)
    vocab_continent = spa.Vocabulary(D)
    
    vocab_model.parse('RAV4 + CAMRY + ACCORD + SONATA + VOLT')
    vocab_company.parse('Toyota + Honda + Kia + Chevy')
    vocab_country.parse('Japan + Korea + US')
    vocab_continent.parse('Asia + NorthAmerica')
    
    model.model2company = spa.AssociativeMemory(
        vocab_model, vocab_company,
        input_keys=['RAV4', 'CAMRY', 'ACCORD', 'SONATA', 'VOLT'],
        output_keys=['Toyota', 'Toyota', 'Honda', 'Kia', 'Chevy'])
    model.company2country = spa.AssociativeMemory(
        vocab_company, vocab_country,
        input_keys=['Toyota', 'Honda', 'Kia', 'Chevy'],
        output_keys=['Japan', 'Japan', 'Korea', 'US'])
    model.country2continent = spa.AssociativeMemory(
        vocab_country, vocab_continent,
        input_keys=['Japan', 'Korea', 'US'],
        output_keys=['Asia', 'Asia', 'NorthAmerica'])
        
    model.state = spa.State(D, vocab=vocab_model)
    nengo.Connection(model.state.output, model.model2company.input)
    nengo.Connection(model.model2company.output, model.company2country.input)
    nengo.Connection(model.company2country.output, model.country2continent.input)

Regarding your last post, is there a specific question? @Seanny123 did some work on learning associations and might be better qualified to comment on that. I also did some work in that direction, but only using a specially derived learning rule (publications on that might be upcoming). It is also not super straightforward to use that learning rule in other models, though the code can be found here and here. I might prepare a pull request to get it included in nengo_extras in the near future to make it more accessible.


Jan -

Thanks

Howard