Nengo SPA Different Results In Repeated Tests And Neuron Types Used

Greetings,

I am using SPA to build a biologically plausible knowledge representation network, but I have some questions.

1st Question

First of all I am using an autoassociative thresholding memory to store pointers of the form

A = C1 * K1 + C2 * N1;
B = C1 * (K2 + N2) + C2 * K3;

which I query as follows:

vocab['A'] * ~vocab['C1'] >> model.memory

Every time I rerun my query, though, I get a different result. Usually it is the correct one, but once in a while it is wrong. When I query the second pointer B, which should return the sum (K2 + N2), the errors are more frequent.

If I had to hazard a guess, I would say this is because semantic pointers are instantiated randomly, which would explain why I get different results every time. But why do I get errors too?

An example can be seen in the following graphs. In the first two you can see the variation I am talking about, and in the next two you can see errors, as the correct green and orange lines are mistakenly lower in the similarity measurements.

[Similarity plots omitted: two runs showing the variation, two runs showing the errors]

2nd Question

When I query the memory like this

vocab['A'] * ~vocab['C1'] >> model.memory

the model runs very fast, but when I instead build the query like this

first = spa.Transcode('A', output_vocab=vocab)
second = spa.Transcode('C1', output_vocab=vocab)

term = spa.State(vocab, neurons_per_dimension=5)
query = spa.State(vocab, neurons_per_dimension=5)

nengo.Connection(first.output, term.input)
nengo.Connection(second.output, query.input)

term * ~query >> model.memory

the model runs incredibly slowly. Why? What is the difference?

3rd Question

When using standard Nengo you can choose the type of neurons and synapses you use, but I can't find any option to see or change what kind of neurons and synapses I get when I model with SPA. What are the default neuron types used in SPA?

There are various sources of randomness in a SPA model. There’s randomness in the SPA vocabulary, as you suggest. There’s also randomness in the neuron parameters (e.g. gains, biases, encoders). All of those will contribute to variability in performance if you run the model multiple times (including getting the answer wrong sometimes). A lot of the time that is desired behaviour when you’re trying to model a biological system, as it reflects the fact that results are not deterministic (and choosing appropriate ranges for your random variables can mimic the variability observed in biology).

But if you’d like to avoid that randomness, you can control the random seeds for all those different components. For example, you can set the seed parameter on a nengo.Network or nengo_spa.Network.
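Something like this (a minimal sketch, assuming nengo_spa; the dimensionality and seed values are arbitrary, and the pointer definitions just mirror yours):

import numpy as np
import nengo_spa as spa

# Seeding the vocabulary's pointer generator makes the semantic
# pointers themselves reproducible across runs.
vocab = spa.Vocabulary(64, pointer_gen=np.random.RandomState(0))
vocab.populate('C1; C2; K1; K2; K3; N1; N2')
vocab.populate('A = C1 * K1 + C2 * N1; B = C1 * (K2 + N2) + C2 * K3')

# Seeding the network makes the neuron parameters (gains, biases,
# encoders) reproducible as well.
with spa.Network(seed=0) as model:
    memory = spa.State(vocab)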

You can also decrease the effects of the randomness by modifying different parameters of your model. For example, increasing the dimensionality of your SPA vocabulary vectors will tend to make them more distinct, and therefore less likely to be accidentally confused with each other as you’re seeing. Or increasing the number of neurons you’re using to represent those vocabulary vectors will decrease the variability in that neural representation (making the results more consistent).
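As a sketch of those two knobs (the numbers are illustrative, not recommendations):

# Higher-dimensional pointers are closer to orthogonal, so queries
# are less likely to be confused with one another.
vocab = spa.Vocabulary(256)

with spa.Network() as model:
    # More neurons per dimension gives a less noisy neural
    # representation of each pointer (the nengo_spa default is 50).
    memory = spa.State(vocab, neurons_per_dimension=100)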

The first example there isn't actually representing the inputs in neurons; it is just computing abstract mathematical operations on vocabulary vectors. In the second case you're actually using simulated populations of neurons (in the spa.State networks) to represent A and C1 and to compute the mathematical transformations, which is why it is more computationally intensive.

The defaults are the same as in core Nengo: spiking nengo.LIF() neurons and lowpass synapses. You can change the defaults using the Nengo config system. For example,

with nengo.Network() as net:
    net.config[nengo.Ensemble].neuron_type = nengo.LIFRate(...)
    term = spa.State(...)

would cause all the neurons in term to have the LIFRate neuron type. The SPA module tries to abstract away a lot of those underlying details, which is why they aren’t as easy to see/manipulate as in a regular Nengo model. If you do need more fine-grained control over those parameters, you might find it easier to build your model directly, rather than using SPA.
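If you just want to see what a built SPA model ended up with, you can inspect it like any core Nengo network:

# Every SPA module is ultimately built from ordinary Nengo ensembles
# and connections, so the usual introspection works.
for ens in model.all_ensembles:
    print(ens, ens.neuron_type)
for conn in model.all_connections:
    print(conn, conn.synapse)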


So, given that I want my model to be biologically plausible, is the best way to query the memory to use the following piece of code?

first = spa.Transcode('A', output_vocab=vocab)
second = spa.Transcode('C1', output_vocab=vocab)

term = spa.State(vocab, neurons_per_dimension=5)
query = spa.State(vocab, neurons_per_dimension=5)

nengo.Connection(first.output, term.input)
nengo.Connection(second.output, query.input)

term * ~query >> model.memory

Given your answer, is there a similar difference between abstract mathematical operations and an actual neural implementation when using the * operator as opposed to the CircularConvolution network? Should I construct all my pointers using the network?

It depends on what you are trying to model. If representing A and C1 neurally is important, then yes, you would want to use that approach. If you just want to focus on, e.g., the memory component, then it may not matter to your model whether A and C1 are represented neurally or not. There will always be some parts of your model that are more abstract and some that are more biologically detailed. Biological plausibility is a research problem that depends on the questions you want to ask of your model.

The important distinction is not so much between using * vs. CircularConvolution; it's what the inputs to your computation are. In this code

vocab['A'] * ~vocab['C1']

your inputs are just vocabulary vectors, not neural populations. In this code

term * ~query

your inputs are neural populations (the spa.State networks). So you can use * in either case, but what gets built will be different depending on what the inputs are.
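For reference, when the inputs are spa.State networks, * builds the neural binding network for you; it is roughly equivalent to wiring up the core Nengo network yourself (a sketch; n_neurons is illustrative):

# Roughly what `term * ~query` builds under the hood: a neural
# circular convolution with the second input inverted.
cconv = nengo.networks.CircularConvolution(
    n_neurons=200, dimensions=vocab.dimensions, invert_b=True)
nengo.Connection(term.output, cconv.input_a)
nengo.Connection(query.output, cconv.input_b)
nengo.Connection(cconv.output, model.memory.input)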


To add to @drasmuss's answer to your third question: not only can you change the underlying core Nengo parameters this way (such as the neuron type), but also SPA-specific parameters such as the number of neurons per dimension. The documentation has an example of this, as well as an even more extensive one.
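A minimal sketch of what that looks like (assuming nengo_spa's config support for module parameters):

with spa.Network() as model:
    # SPA-specific parameters can be given defaults through the same
    # config system as core Nengo parameters.
    model.config[spa.State].neurons_per_dimension = 100
    state = spa.State(vocab)  # built with 100 neurons per dimension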
