Basal Ganglia - Thalamus Questions

Hi Everyone,

I’m learning how to use the basal ganglia and thalamus from nengo_spa.modules. I created a setup with three state inputs whose symbols vary over time (to create changing input values). The three inputs connect to a basal ganglia module, then to the thalamus, and finally to an SPA state.

I expected the basal ganglia to act as a winner-take-all (WTA) block, with the thalamus suppressing the losing inputs. Instead, I found the output often contains multiple states, frequently not the most dominant one, and occasionally it doesn’t match the input at all.

I also found a considerable amount of noise on the thalamus signal. I attempted to add more neurons and some filtering, which produced only a small improvement.

I’ve looked for examples of these modules and how to connect them together, but haven’t been successful. Can anyone help me better understand what I’m missing, or direct me to some examples of using these? I could consider SPA action selection as well, but I’m trying to avoid that at this time.

Hi @MZeglen,

From my experience, the BG network has some peculiar behaviours, especially in input regimes it is not designed for. First and foremost, I tend to think of the Nengo BG network more as a soft-max (i.e., it can let multiple input values through) than as a WTA network. Think of it as amplifying the differences between the input values, rather than performing a strict WTA operation. If you want full WTA behaviour, combine the BG network with the thalamus network using the mutual_inhibit=1 option. Note: you can increase the mutual_inhibit value to get a stronger WTA response, but doing so sometimes “locks” the output to a specific value. You’ll need to experiment to find the appropriate values for your network.
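As a rough sketch of what that combination looks like (the action count here is arbitrary, just for illustration):

import nengo
import nengo_spa as spa

n_actions = 3  # hypothetical number of competing values

with spa.Network() as model:
    bg = spa.modules.BasalGanglia(action_count=n_actions)
    # mutual_inhibit makes the thalamus channels inhibit each other,
    # pushing the combined BG + thalamus circuit towards true WTA behaviour
    th = spa.modules.Thalamus(action_count=n_actions, mutual_inhibit=1)
    nengo.Connection(bg.output, th.input, synapse=0.01)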

As for the BG itself, it works best when the input values are in the range of about 0.3 to 1. Below 0.3, the neurons responsible for responding to that value don’t fire, so the input is effectively 0 at that point. Above 1, the neurons start to saturate, which can cause weird effects with the inhibition. As long as your input values stay within this range, the BG should perform as intended.
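For example, if your utilities naturally live in [-1, 1], one illustrative way to squash them into that operating range is an affine rescale on the way in (the values here are made up):

import nengo
import nengo_spa as spa

n_actions = 3

with spa.Network() as model:
    # hypothetical raw utilities, somewhere in [-1, 1]
    utilities = nengo.Node([0.9, -0.2, 0.4])
    bg = spa.modules.BasalGanglia(action_count=n_actions)

    # u -> 0.5 * u + 0.5 maps [-1, 1] onto [0, 1], so the stronger
    # utilities land inside the BG's preferred 0.3-to-1 range
    bias = nengo.Node([0.5] * n_actions)
    nengo.Connection(utilities, bg.input, transform=0.5)
    nengo.Connection(bias, bg.input)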

Can you post a snippet of the code that is exhibiting these weird behaviours? Given the complexity of the BG network, it will probably be easier to debug that specific network than to make suggestions for getting it working on a general problem.

Hi @xchoo,

Thank you for the insights and information. I believe I probed the signals without computing the appropriate similarities to make the data useful, so the signals looked incredibly noisy and incoherent. My issues seem to be resolved at this point.

If something else comes up I’ll add the questions here. Thanks again.

Hi @xchoo. I was going back through this and have some further questions. The output still doesn’t look quite right: I’m expecting a single value to be large and the rest to be suppressed. I see a little of that, but the output value is scaled down by a factor of 2 or 3 in most cases. I’m not sure why that’s happening or how to adjust it. Below is a section of code that should be self-contained and easy to run.

# nengo imports
import nengo
import nengo_spa as spa

# Set up the SPA vocabulary
spa_dim = 32
vocab = spa.Vocabulary(dimensions=spa_dim, max_similarity=0.35)
vocab.populate('APPLE;BANANA;RASPBERRY;BLUEBERRY;MANGO;PINEAPPLE')
vocabList = list(vocab.keys())

model = spa.Network(label="Model", seed=8)
model.config[nengo.Ensemble].neuron_type = nengo.LIF()
with model:

    def seq_in(t):
        # Step through the vocabulary, one pointer per second, wrapping
        # the index around if t exceeds the vocabulary size
        vl = len(vocabList)
        i = int(t) % vl
        if t < 4:
            return vocabList[i]
        return spa.semantic_pointer.Zero(spa_dim)
    
    def seq_in2(t):
        # Same as seq_in, but stepping through the vocabulary in reverse
        vl = len(vocabList)
        i = int(t) % vl
        if t < 5:
            return vocabList[vl - 1 - i]
        return spa.semantic_pointer.Zero(spa_dim)

    def stepped_in(t):
        # Present each pointer for a 0.02 s window, then go silent
        for i, x in enumerate(vocabList):
            if t < (1 + i) * 0.02:
                return x
        return spa.semantic_pointer.Zero(spa_dim)

    # Inputs
    input0 = spa.Transcode(seq_in, output_vocab=vocab, label="Input 0")
    input1 = spa.Transcode(seq_in2, output_vocab=vocab, label="Input 1")
    input2 = spa.Transcode(stepped_in, output_vocab=vocab, label="Input 2")

    in_state = spa.State(vocab, 32)
    bg = spa.modules.BasalGanglia(action_count=spa_dim, label="Basal")
    th = spa.modules.Thalamus(action_count=spa_dim, label="Thalamus")
    result = spa.State(vocab, 32)

    # Connections
    input0 >> in_state
    input1 >> in_state
    input2 >> in_state

    nengo.Connection(in_state.output, bg.input, synapse=0.01)
    nengo.Connection(bg.output, th.input, synapse=0.01)
    nengo.Connection(th.output, result.input, synapse=0.1)
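For reference, here is roughly how I run it and inspect the output (the probes and plotting are just my quick check):

import matplotlib.pyplot as plt

with model:
    p_in = nengo.Probe(in_state.output, synapse=0.03)
    p_out = nengo.Probe(result.output, synapse=0.03)

with nengo.Simulator(model) as sim:
    sim.run(4.0)

# Plot the similarity of the input and output to each vocabulary item
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), spa.similarity(sim.data[p_in], vocab))
plt.legend(vocabList, loc="upper right")
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), spa.similarity(sim.data[p_out], vocab))
plt.xlabel("Time (s)")
plt.show()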

Could you provide some context behind what you are trying to achieve with this network? Are you trying to get the network to pick out a specific semantic pointer? Or are you trying to get it to cycle between them? Or…?

From what I can see of the model, I don’t think it is currently set up to do anything of the sort. I’ll see if I can explain why. The BG network is created with a defined action_count number of inputs. To the BG network, these inputs are treated as scalar values, and what it is attempting to do is pick the “winner” (largest value) among those scalars.

In your network, however, you have specified the number of inputs to the BG as the dimensionality of your semantic pointers. Furthermore, you have connected the output of a spa.State to the input of the BG. So, in effect, the BG is going to try to output the “winner” element of the vector representation of the semantic pointer. As an example, let’s say the semantic pointer is:

[-0.70701952  0.16752435 -0.12744434  0.19309422  0.48765443 -0.17715978
 -0.37804372  0.08013782]

Each of the vector elements would be treated as a scalar input to the BG network, and it would try to find the “maximum” across all of the elements (which in this case would be 0.48). I do not think this is your intended use case for the BG network.
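To make that concrete, here is what the BG is effectively computing on that vector (plain NumPy, just for illustration):

import numpy as np

sp = np.array([-0.70701952, 0.16752435, -0.12744434, 0.19309422,
               0.48765443, -0.17715978, -0.37804372, 0.08013782])

# Each element is treated as a separate "action" utility, so the
# "winner" is just the largest single element of the vector...
print(np.argmax(sp), sp.max())  # -> 4 0.48765443

# ...which says nothing about which vocabulary item the vector as a
# whole is most similar to.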

I was hoping for a winner-take-all (WTA) circuit over the SPA symbols. If the inputs were APPLE (value of 0.5), BANANA (0.6), and MANGO (1.1), then MANGO would come out high and the rest would be suppressed. I definitely don’t want the element-wise “winner” behaviour you’ve described.

I looked into the action_count variable. If I change it to anything other than spa_dim, I run into an error stating: **ValidationError**: init: Shape of initial value () does not match expected shape (1, 32)

You mention the connection goes from the spa.State output to the input of the BG via nengo.Connection(in_state.output, bg.input, synapse=0.01), but this appears to be the only way to create the connection; I’m not able to do it using the >> operator. How should they connect? Can you help me better understand why connecting state.output to bg.input is incorrect?

I know there is another, network-based basal ganglia; I would have expected the SPA module version to work from the vector representation, but it almost feels like it is doing the same thing as the network version.

Maybe all of this confusion shows my need for clarification. I was hoping there were more examples related to this module in the examples and documentation, but they all seem to relate to the other BG or to action selection, which is handled differently. Action selection is an option I have considered, but I was hoping to avoid it.

Oic! I think in that case, you’ll want to use an associative memory (or cleanup memory) network. There’s an example of how to create and use one here. If you really want to use the BG network to perform the same task, it is possible, and I can elaborate on it further (in a future post) if you want to know how it’s done.
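Here is a minimal sketch of the cleanup approach (assuming nengo_spa’s WTAAssocMem; the threshold and input expression are just placeholders):

import nengo_spa as spa

vocab = spa.Vocabulary(32)
vocab.populate('APPLE; BANANA; MANGO')

with spa.Network() as model:
    stimulus = spa.Transcode(
        '0.5 * APPLE + 0.6 * BANANA + 1.1 * MANGO', output_vocab=vocab)
    # Winner-take-all cleanup: only the best-matching pointer above
    # the threshold is passed through to the output
    cleanup = spa.WTAAssocMem(
        threshold=0.3, input_vocab=vocab, mapping=vocab.keys())
    out = spa.State(vocab)

    stimulus >> cleanup
    cleanup >> out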

That’s correct: if you create an instance of the BG network yourself, you’ll need to connect to it using nengo.Connections. In that case, we typically use the transform attribute on these connections to “convert” vector inputs into the scalar inputs the BG expects. As an example, the (pseudo)code below converts an input into a similarity measure with the “BANANA” semantic pointer:

nengo.Connection(in_state.output, bg.input[0], transform=vocab.parse("BANANA").v, synapse=0.01)

Note: I haven’t actually tested the code; you may need to reshape the “BANANA” vector so the transform has the (1, 32) shape nengo expects (e.g., using vocab.parse("BANANA").v.reshape(1, -1)).
What the code above does is provide the similarity of the in_state output to the semantic pointer “BANANA” as the input to the 1st “action” of the BG network.

For your example with the 6 semantic pointers you want to do the WTA with, you’ll need to create a connection for each of the semantic pointers, each going to a different “action” of the BG network. The BG network itself will need to be created with 6 actions too (one for each semantic pointer).
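Putting that together for your six pointers, the wiring would look something like this (untested sketch; the reshape calls give the transforms the (1, 32) and (32, 1) shapes that nengo.Connection expects):

import nengo
import nengo_spa as spa

spa_dim = 32
vocab = spa.Vocabulary(spa_dim)
vocab.populate('APPLE;BANANA;RASPBERRY;BLUEBERRY;MANGO;PINEAPPLE')
keys = list(vocab.keys())
n_actions = len(keys)  # one BG "action" per semantic pointer

with spa.Network() as model:
    in_state = spa.State(vocab)
    bg = spa.modules.BasalGanglia(action_count=n_actions)
    th = spa.modules.Thalamus(action_count=n_actions)
    result = spa.State(vocab)

    nengo.Connection(bg.output, th.input, synapse=0.01)

    for i, key in enumerate(keys):
        v = vocab.parse(key).v
        # Input transform: the dot product with this pointer gives its
        # similarity, which becomes the utility of action i
        nengo.Connection(in_state.output, bg.input[i],
                         transform=v.reshape(1, -1), synapse=0.01)
        # Output transform: when action i wins, the thalamus gates this
        # pointer through to the result
        nengo.Connection(th.output[i], result.input,
                         transform=v.reshape(-1, 1), synapse=0.01)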

The spa.modules.BasalGanglia is just a wrapper around the nengo.networks.BasalGanglia network, with added SPA attributes that make it possible to work within the SPA framework. However, those SPA attributes are designed to work with the spa.ActionSelection() object, not with a BG used directly in your SPA network.

The action selection mechanism serves to simplify how a user works with the BG network. Rather than requiring the user to create:

  • The connections and transforms to the BG input
  • The connections and transforms from the Thalamus output
  • The appropriate weights for the transforms to implement the desired actions

the user can use the action selection object to create all of this internally (automatically) and not have to worry about it.
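For comparison, the action-selection version of the same WTA only takes a few lines (sketch; shown with three pointers for brevity, and the dot products are the same similarity computations as the manual transforms above):

import nengo_spa as spa

vocab = spa.Vocabulary(32)
vocab.populate('APPLE;BANANA;MANGO')

with spa.Network() as model:
    in_state = spa.State(vocab)
    result = spa.State(vocab)

    # Each ifmax is one BG/thalamus "action": the first argument is its
    # utility (similarity to a pointer), the second is its effect
    with spa.ActionSelection():
        spa.ifmax(spa.dot(in_state, spa.sym.APPLE),
                  spa.sym.APPLE >> result)
        spa.ifmax(spa.dot(in_state, spa.sym.BANANA),
                  spa.sym.BANANA >> result)
        spa.ifmax(spa.dot(in_state, spa.sym.MANGO),
                  spa.sym.MANGO >> result)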
