Integrating learning into SPA

Dear all,
as far as I know, the learning example I am interested in is the unsupervised_learning_voja_rule example. I am having difficulty transferring this learning example to SPA. Terry Stewart helped me a little bit, but the attached ipynb example still produces errors.
If you do not want to go into the details of the example:
what I want is to define a SPA architecture including an associative memory that performs, for example, this mapping:

    # define the mapping to be learned
    mapping = {
        "ONE": "ORANGE",
        "TWO": "APRICOT",
        "THREE": "CHERRY",
        "FOUR": "STRAWBERRY",
        "FIVE": "APPLE",
    }

and what I want to see is how this mapping could be learned (that is: how to establish the associative memory representing this mapping through learning).

It would be great if someone could help me with this.
Thanks in advance,
Bernd
(I am a new user and currently cannot upload anything; I want to upload an ipynb file.)

Dear Xuan,
is it correct that you have now taken over this topic in order to help me?
If I should send you the IPython notebook file on which I have already worked a little together with Terry, please let me know how to upload the file here in the forum (I think I do not have permission to upload anything here), or should I send it to you by email?
Best
Bernd

For completeness (I responded over email as well), we do provide a Nengo example demonstrating how the Voja rule can be used to learn new vector associations.

Some work will be needed to adapt the Voja rule to natively use the NengoSPA syntax, but it shouldn’t be too difficult.

If you are unable to attach the Jupyter notebook here, you can upload it to Google Drive or Dropbox and post the link here. Alternatively, you can send it to me by email. Just be aware that it may take some time to fully examine your code.

Dear Xuan,
the Nengo example you pointed to is the example I started with.
This is just the starting point.
It would be helpful for me now to see how this learning works in the context of a heteroassociative memory as defined in the SPA context.
As far as I can see, the learning in this example works directly on a nengo.Ensemble. But how can I adapt this example in the SPA context? And how can I display learning results for Semantic Pointers (S-pointers)?
Bernd

To do learning on a nengo_spa.State, you need to understand a little bit about the structure of the network. It is not a single ensemble, but actually multiple ensembles. By default, the first dimension will be represented by one ensemble, the next 15 dimensions (subdimensions - 1) by another ensemble, and then every additional 16 dimensions (assuming subdimensions = 16) will get one ensemble each. You can set represent_cc_identity = False to get a simpler split into equally sized ensembles of subdimensions dimensions each. That should be better for learning.
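
For illustration, here is a minimal sketch (the dimensionalities are arbitrary choices on my part) that makes this split visible by printing the size of each ensemble:

    import nengo_spa as spa

    with spa.Network():
        # default split: first dimension, then subdimensions - 1 dimensions,
        # then blocks of subdimensions dimensions each
        default_state = spa.State(vocab=32, subdimensions=16)
        # simpler split: equally sized blocks of subdimensions dimensions
        simple_state = spa.State(
            vocab=32, subdimensions=16, represent_cc_identity=False
        )

    print([e.dimensions for e in default_state.all_ensembles])  # [1, 15, 16]
    print([e.dimensions for e in simple_state.all_ensembles])   # [16, 16]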

To implement learning on those ensembles, you need to explicitly connect the ensembles of the pre and post State with nengo.Connections and the desired learning rule. You can try a 1:1 mapping between the pre and post ensembles, but this will limit the learnable relations to some degree, because only blocks of size subdimensions can be correlated. For example, the first 16 dimensions of the input cannot influence any dimensions other than the first 16 dimensions of the output. You can also try to implement full connectivity between all the ensembles, but this requires many connections and might be very costly to simulate. In case you are wondering how to access the ensembles of the State: it has an all_ensembles attribute. You can iterate over all ensembles in it and connect them as desired; see the sketch below.
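
Here is a sketch of the 1:1 variant, using PES as one possible learning rule. Take this as an illustration, not a definitive implementation: the dimensionalities, the learning rate, and the error wiring (actual output minus target) are arbitrary choices.

    import nengo
    import nengo_spa as spa
    import numpy as np

    dim = 32
    subdim = 16

    with spa.Network() as model:
        pre = spa.State(vocab=dim, subdimensions=subdim,
                        represent_cc_identity=False)
        post = spa.State(vocab=dim, subdimensions=subdim,
                         represent_cc_identity=False)
        target = spa.State(vocab=dim)  # supplies the desired association

        # error = actual output - target
        error = nengo.Node(size_in=dim)
        nengo.Connection(post.output, error)
        nengo.Connection(target.output, error, transform=-1)

        # 1:1 learned connections between corresponding ensembles
        for i, (pre_ens, post_ens) in enumerate(
            zip(pre.all_ensembles, post.all_ensembles)
        ):
            conn = nengo.Connection(
                pre_ens,
                post_ens,
                # start without any mapping; PES adjusts the decoders
                function=lambda x, d=subdim: np.zeros(d),
                learning_rule_type=nengo.PES(learning_rate=1e-4),
            )
            # feed each block the matching slice of the error signal
            nengo.Connection(
                error[i * subdim:(i + 1) * subdim], conn.learning_rule
            )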

You could also try to put your whole Semantic Pointer into a single ensemble, either by feeding it into a regular nengo.Ensemble of the appropriate dimensionality or by setting subdimensions = dimensions (together with represent_cc_identity = False) on the State. Depending on the dimensionality and the number of neurons you use, this might however result in a very noisy representation, or in long build times with a lot of memory consumption.
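
For completeness, a small sketch of this single-ensemble variant (again with arbitrary sizes):

    import nengo
    import nengo_spa as spa

    dim = 16

    with spa.Network():
        # one ensemble representing the full Semantic Pointer
        single = spa.State(
            vocab=dim, subdimensions=dim, represent_cc_identity=False
        )
        assert len(single.all_ensembles) == 1

        # alternatively, feed the State into a plain nengo.Ensemble
        ens = nengo.Ensemble(n_neurons=50 * dim, dimensions=dim)
        nengo.Connection(single.output, ens)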

As you see, there are a number of different ways to go about learning with NengoSPA. They all have trade-offs, so there isn't really a one-size-fits-all solution. For what it might be worth, I did some work on learning associations in my thesis (Chapter 9), though I didn't use Voja, but looked at PES and a new learning rule, the "association matrix learning rule" (AML).

Hi Jan,
thank you for your kind and helpful answer.

Nevertheless, for us Nengo "users" it would be a great help if you could give us a direct example of how learning can be integrated when using an assoc-mem in a SPA model built directly with the SPA syntax, like here:
See:
https://www.nengo.ai/nengo-spa/examples/associative-memory.html

    model.assoc_mem = spa.WTAAssocMem(
        threshold=0.3,
        input_vocab=vocab_numbers,
        mapping=mapping,
        function=lambda x: x > 0.0,
    )

The S-pointers are defined in a SPA vocabulary like:

    dim = 16
    vocab_numbers = spa.Vocabulary(dimensions=dim)
    vocab_numbers.populate("ONE; TWO; THREE; FOUR; FIVE")

and the mapping is defined:

    mapping = {
        "ONE": "TWO",
        "TWO": "THREE",
        "THREE": "FOUR",
        "FOUR": "FIVE",
    }

but initially, the

    model.assoc_mem

should not be able to do the correct mapping.
The correct mapping should result from a learning process for that assoc_mem.

Best
Bernd

I finally prepared an example of how to do learning in NengoSPA. Feedback is welcome.

Incidentally, the example is learning associations. It uses the PES learning rule, which requires repeated presentations of the association pairs. Other learning rules might give you better results, as already mentioned in the previous posts.
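
As an illustration of what "repeated presentations" means, here is a small sketch of a stimulus that cycles through the vocabulary keys; the presentation time and all names are arbitrary choices, and the routing into the learning network is only indicated in a comment:

    import nengo_spa as spa

    keys = ["ONE", "TWO", "THREE", "FOUR", "FIVE"]
    vocab = spa.Vocabulary(16)
    vocab.populate("; ".join(keys))

    t_present = 0.5  # seconds per item, an arbitrary choice

    def cycle(t):
        # present each key in turn, over and over
        return keys[int(t / t_present) % len(keys)]

    with spa.Network() as model:
        stimulus = spa.Transcode(cycle, output_vocab=vocab)
        # stimulus >> pre  # route into the pre State of the learning network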

Note that you will have to do the learning between two State modules. You won't be able to use a WTAAssocMem without any associations and learn them directly in there. The WTAAssocMem only provides a ready-built network with some specific structure.

Thank you, Jan!