Use of different vocabularies in SPA together with binding

Dear all
in older versions of the SPA module I used different vocabs or sub-vocabs in order to separate S-pointers that were activated in different state buffers.
In the current SPA version (import nengo_spa as spa) I run into trouble when I try to use binding together with different vocabs.

Is it still possible in the current SPA version to define sub-vocabs (or better, different vocabs in one module), as done here:


# define only one vocab (and perhaps sub-vocabs!)

dimensions = 64 # 512
vocab = spa.Vocabulary(dimensions)

words_visu_in = ['V_Ball', 'V_Auto', 'V_rot', 'V_blau']
words_concepts = ['K_Ball', 'K_Auto', 'K_rot', 'K_blau']
words_audi_in = ['A_Objekt', 'A_Farbe']
words_motor_out = ['M_Ball', 'M_Auto', 'M_rot', 'M_blau']
words_control = ['NEUTRAL', 'V_IN', 'A_IN', 'GEN_INFO', 'GEN_ERG', 'SEL_KON']


# add the basic S-pointers to the vocab
for w in (words_visu_in + words_concepts + words_audi_in
          + words_motor_out + words_control):
    vocab.parse(w)

# add the bound pairs under their own names
vocab.add('Obj_Ball', vocab.parse('A_Objekt * V_Ball'))
vocab.add('Obj_Auto', vocab.parse('A_Objekt * V_Auto'))
vocab.add('Farbe_rot', vocab.parse('A_Farbe * V_rot'))
vocab.add('Farbe_blau', vocab.parse('A_Farbe * V_blau'))


vocab_visu_in = vocab.create_subset(words_visu_in)
vocab_motor_out = vocab.create_subset(words_motor_out)
vocab_audi_in = vocab.create_subset(words_audi_in)
vocab_control = vocab.create_subset(words_control)
vocab_concepts = vocab.create_subset(words_concepts)
vocab_info = vocab.create_subset(['Obj_Ball', 'Obj_Auto', 'Farbe_rot', 'Farbe_blau'])


What I want is to define vocabs for these keys (in order to do the convolution, i.e., the binding and unbinding processes, with separate, well-defined vocabs):

vocab_syll_mem_keys = ['BAP', 'GUP', 'DIP', 'NONE']

vocab_pos_keys = ['Pos01', 'Pos02', 'Pos03', 'Pos04']

vocab_conv_syll_pos_keys = ['BAP * Pos01', 'GUP * Pos02', 'DIP * Pos03', 'BAP * Pos04']

Kind regards

Hi @bernd,

Yes, you are able to define sub-vocabularies in NengoSPA. The function call is the same as in the older SPA module.

To do this, I'd define a "master" vocabulary that contains all of the semantic pointers (of the same dimensionality) used in your network, then create sub-vocabularies for each of the different groups. You can see that I do this in the Spaun model. Here is where I create the "master" vocabulary, and here is where I create all of the sub-vocabularies.

Hi Xuan
thank you for that help
that works so far.
But I run into trouble in the case of bindings:
If I have:

syll_in * pos_in >> conv_syll_pos

and I have defined a sub-vocab for the syll_in S-pointers and one for the pos_in S-pointers:

pos_in = spa.Transcode(pos_input, output_vocab=vocab_pos)
syll_in = spa.Transcode(syll_input, output_vocab=vocab_syll_mem)


pos_in = State(
    subdimensions=subdims, neurons_per_dimension=n_neurons)
syll_in = State(
    subdimensions=subdims, neurons_per_dimension=n_neurons)

conv_syll_pos = State(
    subdimensions=4, neurons_per_dimension=n_neurons,
    feedback=1.0, feedback_synapse=0.4)

How do I define the vocab (the sub-vocab) for conv_syll_pos (because vocab=D does not work)?

I get the error message for the convolution:

SpaTypeError: Different vocabularies: TVocabulary<256-dimensional vocab at 0x7fa2dab76fa0>, TVocabulary<256-dimensional vocab at 0x7fa2dab99310>


PS: In addition, here are the definitions of the vocabs.

master vocab

vocab_master = Vocabulary(D, strict=False)


vocab_action_control = vocab_master.create_subset(keys=vocab_action_control_keys)
vocab_active_syll = vocab_master.create_subset(keys=vocab_active_syllable_keys)
vocab_pos = vocab_master.create_subset(keys=vocab_pos_keys)
vocab_syll_mem = vocab_master.create_subset(keys=vocab_syll_mem_keys)
vocab_syll_phrase = vocab_master.create_subset(keys=vocab_syll_phrase_keys)
vocab_visual_in = vocab_master.create_subset(keys=vocab_visual_in_keys)

and if I add something like:

convolution subvocab:


vocab_conv_syll_pos = Vocabulary(D, strict=False)
for syll in vocab_syll_mem_keys:
    for pos in vocab_pos_keys:
        sp_str_name = '%s_%s' % (syll, pos)
        sp_str_conv = '%s*%s' % (syll, pos)
        vocab_conv_syll_pos.add(sp_str_name, vocab_master.parse(sp_str_conv))

I get the error message:

ValidationError: Vocabulary.: Cannot add a semantic pointer that belongs to a different vocabulary or algebra.


You have to tell Nengo SPA explicitly how SPs from one vocabulary should be translated to another one. This is the relevant documentation. You probably want to use spa.reinterpret in this case.

Hi Jan
thank you for that idea.
I used a different way, which has been suggested by Xuan above:
define vocab_master and subvocabs from that vocab_master.
Then I do not need to reinterpret, I think.

But now I run into trouble, because I cannot use a name like A*B for an S-pointer,
only something like A_B ... as the name for vocab_master.parse('A * B').

I used something like:

convolution subvocab:

for syll in vocab_syll_mem_keys:
    for pos in vocab_pos_keys:
        sp_str_name = '%s_%s' % (syll, pos)
        sp_str_conv = '%s*%s' % (syll, pos)
        vocab_master.add(sp_str_name, vocab_master.parse(sp_str_conv))

Is it possible in nengo_spa to use names like A*B?
(It seems to work in some other versions, like the one used for the Spaun code.)

Kind regards

No… unfortunately, in NengoSPA, it’s no longer possible to do this. In NengoSPA, you can reference semantic pointer symbols in your code by doing this:




This is useful if you want to insert them directly in the SPA actions, as is done in this example.

However, the side-effect of this feature is that the semantic pointer symbol needs to be a valid Python variable name (since Python will try to interpret the full symbol as Python code). So, something like this

spa.sym.A*B

won't work, because Python will try to interpret it as spa.sym.A (an object) multiplied with B (another object), instead of the desired SPA symbol.

Other restrictions include any arithmetic symbols (+, -, *, /), parentheses or brackets, periods, quotation marks, etc.

Dear Xuan
thank you for your response.
I was just wondering because you are using names like A*B in your Spaun source code:

############ Enumerated vocabulary definitions

    # --- Enumerated vocabulary, enumerates all possible combinations of
    #     position and item vectors (for debug purposes)
    self.enum = Vocabulary(self.sp_dim, rng=rng)
    for pos in self.pos_sp_strs:
        for num in self.num_sp_strs:
            sp_str = '%s*%s' % (pos, num)
            self.enum.add(sp_str, self.main.parse(sp_str))

But no problem for me at the moment, because I replaced the names for bindings with A_B.
... It is not as elegant, because you have to write one extra line for the S-pointer name and the S-pointer convolution for parsing ... :slight_smile:
Kind regards

The Spaun code hasn’t yet been updated for NengoSPA, it was coded using a version that predates the current one. If (or when) it is updated to use NengoSPA, it’ll have to be changed. Personally, I’d probably just replace the '*' with 'x', since the semantic pointer names in Spaun are all capital letters.


Hi Xuan
your above post (2/9) was super helpful: reading the Spaun code, I became aware of how to extend the master vocab with potential bindings like A * B.
Together with an earlier post about enabling and disabling connections under the control of SPA pointers,
it helped me to develop the whole syllable-production model.
I will send you early versions of the related manuscript for publication in a few months.
(Perhaps the only weak point is that convolutions lead to an S-pointer amplitude not much higher than 0.3, which makes the unbinding S-pointers weak as well, and I need these unbinding S-pointers to enable connections using the code you provided me earlier; see the appended graphic: syllable*position and syll_assoc_mem.)
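On the amplitude point: reduced similarity after binding and unbinding is inherent to circular convolution with random vectors (unitary vectors, e.g. via populate's .unitary() option, mitigate it), which is one reason unbound pointers are usually passed through a cleanup (associative) memory. A standalone numpy illustration, unrelated to the model's actual code:

```python
import numpy as np

d = 256
rng = np.random.default_rng(0)

def unit(d):
    # random unit vector
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def cconv(a, b):
    # circular convolution (the binding operation) via FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def approx_inv(a):
    # the ~ operator: involution (index reversal)
    return np.concatenate(([a[0]], a[:0:-1]))

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

syll, pos = unit(d), unit(d)
bound = cconv(syll, pos)
recovered = cconv(bound, approx_inv(pos))

# Unbinding recovers only a noisy copy of syll: its similarity is well
# below 1, but clearly above chance, so a cleanup memory can restore it.
print(cos(recovered, syll))     # roughly 0.7 for d = 256
print(cos(recovered, unit(d)))  # near 0
```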
… But it runs :slight_smile:
Thank you for all your help,
