A one2many mapping: is that possible?

Dear all
I have two state maps with different vocabularies:

model.syll_phono = State(
    subdimensions=subdims, neurons_per_dimension=n_neurons,
    vocab=vocab_syll_phono)

with vocab:
vocab_syll_phono_keys = ['Pw_St_pap', 'Pw_St_tap', 'Pw_St_kap']
and

model.syll_score = State(
    subdimensions=subdims, neurons_per_dimension=n_neurons,
    vocab=vocab_syll_score)

with vocab:
syll_score_keys = [
    # pap, tap, kap, …
    'S_bVC',  # 0
    'S_dVC',  # 1
    'S_gVC',  # 2
    'S_CVb',  # 3
    'S_CVd',  # 4
    'S_CVg',  # 5
    'S_CaC',  # 6
    'S_CiC',  # 7
    'S_CuC',  # 8
    'S_PVC',  # 9
    'S_CVP',  # 10
]

and a mapping between them:

mapping_syll_phono2score = {
    # pap, tap, kap, …
    'Pw_St_pap': 'S_bVC',
    'Pw_St_pap': 'S_CVb',
    'Pw_St_pap': 'S_CaC',
    'Pw_St_pap': 'S_PVC',
    'Pw_St_pap': 'S_CVP',
    'Pw_St_tap': 'S_dVC',
    'Pw_St_tap': 'S_CVb',
    'Pw_St_tap': 'S_CaC',
    'Pw_St_tap': 'S_PVC',
    'Pw_St_tap': 'S_CVP',
    'Pw_St_kap': 'S_gVC',
    'Pw_St_kap': 'S_CVb',
    'Pw_St_kap': 'S_CaC',
    'Pw_St_kap': 'S_PVC',
    'Pw_St_kap': 'S_CVP',
}

My problem: if I implement an association from the syll_phono state map to the syll_score state map by using:

    assoc_mem_syll_phono2syll_score = ThresholdingAssocMem(
        input_vocab=vocab_syll_phono, output_vocab=vocab_syll_score,
        mapping=mapping_syll_phono2score, threshold=0.3,
        function=lambda x: x > 0.)

and

    syll_phono >> assoc_mem_syll_phono2syll_score
    assoc_mem_syll_phono2syll_score >> syll_score     

then I want to get ALL S-pointers in syll_score that are defined in the mapping for the syllable /pap/, /tap/, or /kap/.

… I will attach a (failing) Python code example for this problem.

Is it possible to have an additive overlay of these resulting S-pointers in the syll_score state map?

Kind regards
Bernd

Kroeger_one2many.py (5.4 KB)

Hey @bernd,

So… the issue you are experiencing is partly a Python issue and partly a NengoSPA issue. The gist of the problem is that the mapping parameter of the NengoSPA associative memories currently only supports dictionaries and lists of strings. If you provide a dictionary, NengoSPA uses it to figure out which semantic pointers map onto which. If you provide a list of strings, it creates an auto-associative memory.
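
To make the distinction concrete, here is a minimal sketch (with placeholder vocabularies and keys, not taken from your script) of the two forms the mapping parameter accepts:

import nengo_spa as spa

# Placeholder vocabularies, just for illustration
vocab_in = spa.Vocabulary(64)
vocab_in.populate("A; B")
vocab_out = spa.Vocabulary(64)
vocab_out.populate("X; Y")

with spa.Network():
    # A dict maps keys of the input vocabulary onto keys of the output vocabulary
    am_hetero = spa.ThresholdingAssocMem(
        threshold=0.3, input_vocab=vocab_in, output_vocab=vocab_out,
        mapping={"A": "X", "B": "Y"})

    # A list of keys creates an auto-associative (cleanup) memory on one vocabulary
    am_auto = spa.ThresholdingAssocMem(
        threshold=0.3, input_vocab=vocab_in, mapping=["A", "B"])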

The option you want is obviously the first one, since you are mapping semantic pointers from one vocabulary to another. But here's where the issue with Python comes in. Python dictionaries only allow unique keys, so the mapping dictionary you defined isn't actually fully populated: for each repeated key, only the last value listed survives. If you print the mapping dictionary right after it is defined, you will get this:

{'Pw_St_pap': 'S_CVP', 'Pw_St_tap': 'S_CVP', 'Pw_St_kap': 'S_CVP'}

rather than the full mapping you desire.
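
You can see the key collapsing directly in plain Python (nothing NengoSPA-specific is going on here):

mapping = {
    "Pw_St_pap": "S_bVC",
    "Pw_St_pap": "S_CVb",
    "Pw_St_pap": "S_CaC",
}
print(mapping)       # {'Pw_St_pap': 'S_CaC'} -- later duplicates overwrite earlier ones
print(len(mapping))  # 1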

I thought about a solution to this, and I have come up with two. The first is a “quick and dirty” workaround to the dictionary problem. The idea is to create additional semantic pointer entries in the input vocabulary that have the same vector values as the existing semantic pointers, but different names. This effectively allows the same semantic pointer vector to be mapped onto different output vectors.

Here’s some example code where each of the original semantic pointers in the input vocabulary is copied 10 times and added back into the input vocabulary:

# Original semantic pointer names
syll_phono_keys = [
    "Pw_St_pap",  # 0
    "Pw_St_tap",  # 1
    "Pw_St_kap",  # 2
]

# Create the vocabulary, and populate with initial semantic pointers
vocab_syll_phono = Vocabulary(dim, strict=False)
vocab_syll_phono.populate(";".join(syll_phono_keys))

# Make copies of the initial semantic pointers and put them into the vocabulary
for phono in syll_phono_keys:
    for i in range(10):
        vocab_syll_phono.add(phono + str(i), vocab_syll_phono[phono])

Now, when we create the mapping, we can do something like this:

mapping_syll_phono2score = {
    # pap, tap, kap, ...
    "Pw_St_pap1": "S_CVb",
    "Pw_St_pap2": "S_CaC",
    "Pw_St_pap3": "S_PVC",
    "Pw_St_pap4": "S_CVP",
    "Pw_St_pap5": "S_bVC",
    "Pw_St_tap1": "S_CVd",
    "Pw_St_tap2": "S_CaC",
    "Pw_St_tap3": "S_PVC",
    "Pw_St_tap4": "S_CVP",
    "Pw_St_tap5": "S_dVC",
    "Pw_St_kap1": "S_CVg",
    "Pw_St_kap2": "S_CaC",
    "Pw_St_kap3": "S_PVC",
    "Pw_St_kap4": "S_CVP",
    "Pw_St_kap5": "S_gVC",
}

You will notice that in the mapping above, instead of Pw_St_pap mapping onto multiple outputs, each pair has a unique key. Since all of the Pw_St_papX semantic pointers have the same vector value, the associative memory still works as intended, and you can see this in the plots of the example output.
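
For completeness, here is a minimal sketch of how the expanded vocabulary and mapping plug into the rest of the model. It assumes dim, subdims, n_neurons, syll_score_keys, and mapping_syll_phono2score are defined as in your script; it is not a copy of the attached example:

import nengo_spa as spa

# Output vocabulary, populated with the S_* keys from your post
vocab_syll_score = spa.Vocabulary(dim, strict=False)
vocab_syll_score.populate(";".join(syll_score_keys))

with spa.Network() as model:
    model.syll_phono = spa.State(
        vocab=vocab_syll_phono, subdimensions=subdims,
        neurons_per_dimension=n_neurons)
    model.syll_score = spa.State(
        vocab=vocab_syll_score, subdimensions=subdims,
        neurons_per_dimension=n_neurons)

    # The associative memory now only sees unique keys (Pw_St_pap1, ..., Pw_St_kap5)
    model.am_phono2score = spa.ThresholdingAssocMem(
        threshold=0.3,
        input_vocab=vocab_syll_phono,
        output_vocab=vocab_syll_score,
        mapping=mapping_syll_phono2score,
        function=lambda x: x > 0.0)

    model.syll_phono >> model.am_phono2score
    model.am_phono2score >> model.syll_score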

Now, this “quick and dirty” approach does have a disadvantage… it clutters the input vocabulary with additional entries, which can make analysis or debugging difficult, especially if you are using this vocabulary elsewhere in your code.

The other approach is to modify the NengoSPA associative memory code to accept a mapping argument that is not a dictionary (e.g., a list of tuples). I'm still working on this and will update you when the changes have been made and tested (I also found a bug while looking at the associative memory code, so I'll take the opportunity to fix that as well).
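
In the meantime, if you would rather keep the full one-to-many relation in a single place, a small helper along these lines (a hypothetical sketch, not part of NengoSPA) can turn a list of (input key, output key) pairs into the renamed-copies workaround shown above:

def expand_one2many(vocab_in, pairs):
    # For every (input key, output key) pair, add a uniquely named copy of the
    # input pointer to the vocabulary and map that copy onto the output key.
    mapping = {}
    counters = {}
    for in_key, out_key in pairs:
        n = counters.get(in_key, 0)
        counters[in_key] = n + 1
        alias = f"{in_key}X{n}"                # e.g. Pw_St_papX0, Pw_St_papX1, ...
        vocab_in.add(alias, vocab_in[in_key])  # same vector under a new name
        mapping[alias] = out_key
    return mapping

pairs = [
    ("Pw_St_pap", "S_bVC"), ("Pw_St_pap", "S_CVb"), ("Pw_St_pap", "S_CaC"),
    ("Pw_St_pap", "S_PVC"), ("Pw_St_pap", "S_CVP"),
    # ... same pattern for Pw_St_tap and Pw_St_kap
]
mapping_syll_phono2score = expand_one2many(vocab_syll_phono, pairs)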


Dear Xuan,
Yes, that quick solution at the dictionary level seems to be sufficient for me at the moment.
I myself found a related solution by defining five “sub-mappings” and using five associative memories to solve the problem.
But that means I have to define many more neural buffers and neural connections than may be neurobiologically plausible.
The underlying idea of my whole model raises this problem because each sound within a syllable needs to be addressed like a syllable itself, i.e. by an S-pointer, and that may not be neurobiologically plausible either. The speech sounds (known within the motor programming and motor execution part as speech actions or speech gestures) should not be addressed by S-pointers but by simple pulses generated from the syllable oscillators, which trigger the gestures (modeled as oscillators as well) directly.

I will be back with some more questions concerning the connection between syllable oscillators and speech actions (which need to be implemented in a different way than I did earlier with help from Terry) and with questions on how to implement critically damped oscillators for gestures.
If I can solve that, it may mean that I no longer need one2many mappings at the SPA level.

Thank you so far for your help.
Kind regards
Bernd