Accessing dopamine-level parameters "le" and "lg" in BasalGanglia

Dear all,
In the 2015 Nengo SPA version, we were able to access specific parameters of the basal ganglia for action selection:

old model:

actions = spa.Actions(
    'dot(visual, NEUTRAL) --> somato_expect=NEUTRAL, motor=NEUTRAL',
    'dot(visual, START) --> position=P1',
    'dot(position, P1) --> motor=sequence*~P1, somato_expect=sequence*~P1, position=P1*2, compare_A = somato, compare_B = somato_expect',
    'dot(compare, YES) + dot(position, P1) - 1 --> position=(P2-P1)*2',   
    'dot(position, P2) --> motor=sequence*~P2, somato_expect=sequence*~P2, position=P2*2, compare_A = somato, compare_B = somato_expect',
    'dot(compare, YES) + dot(position, P2) - 1 --> position=(P3-P2)*2',   
    'dot(position, P3) --> motor=sequence*~P3, somato_expect=sequence*~P3, position=P3*2, ...

model.bg = spa.BasalGanglia(actions, weights=dict(lg=0.04, le=0.04, wt=0.2, wp_gpi=0.9), split_GPi_SNr=True)
model.thalamus = spa.Thalamus(model.bg)

It would be great to be able to access the basal ganglia parameters “le”, “lg”, “wt”, “wp_gpi”, and “split_GPi_SNr” (as used in the commands above) in the current nengo_spa version.

The old model was set up as:

import nengo
from nengo import spa

model = spa.SPA()
with model:

The meaning of the parameters “le” and “lg” is documented here:
"One parameter (lg) operates on D1 receptors and the other (le) on D2 receptors, both of which are located in the striatum. Within our network architecture, lg (D1) mainly influences the selection pathway, while le (D2) mainly influences the control pathway. "
cited from:
Senft V, Stewart TC, Bekolay T, Eliasmith C, Kröger BJ (2016) Reduction of dopamine in basal ganglia and its effects on syllable sequencing in speech: A computer simulation study. Basal Ganglia 6: 7-17 (doi, pdf)

The remaining parameters “wp_gpi”, “wt”, and “split_GPi_SNr” are documented here:
" The wt parameter adjusts the strength of afferents to the STN. In the original model, GPi and SNr are combined, as they receive the same inputs and produce the same outputs. To adjust the activity of the SNr and GPi independently, we modified the original model by splitting the GPi and SNr into separate components. This split also required adding a wp parameter that influences the activity of the GPi specifically. The modified structure of the BG model and the parameters influencing specific modules of the BG are displayed in Figure 1."
cited from:
Senft V, Stewart TC, Bekolay T, Eliasmith C, Kröger BJ (2018) Inhibiting Basal Ganglia Regions Reduces Syllable Sequencing Errors in Parkinson’s Disease: A Computer Simulation Study. Frontiers in Computational Neuroscience 12:41 (doi)

Access to the le and lg parameters alone would be sufficient for me. That would allow me to run simulations of my syllable production model (developed with Xuan’s help over the last weeks) under normal and lowered dopamine levels, i.e., with dysfunctions of the action selection process.

Kind regards
Bernd

You find these parameters in the Weights class in the nengo_spa.modules.basalganglia module. Note that it is currently only possible to change them globally. Let me know if you need to create multiple BasalGanglia instances within one model and make isolated parameter changes to each; then I could look into implementing support for that.
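Since such a change is global, one way to keep it from leaking into networks built later is to save and restore the default around the network construction. A minimal sketch of that pattern, using a hypothetical stand-in class (not the real nengo_spa `Weights` class, whose default values may differ):

```python
# Stand-in for nengo_spa.modules.basalganglia.Weights, just to
# illustrate the save/modify/restore pattern for a global class attribute.
class Weights:
    le = 0.2
    lg = 0.2


default_le = Weights.le   # remember the shipped default
Weights.le = 0.04         # reduced-dopamine setting (cf. Senft et al., 2016)

# ... build the action-selection network here; it picks up le = 0.04 ...
built_le = Weights.le

Weights.le = default_le   # restore, so later builds use the default again
print(built_le, Weights.le)  # -> 0.04 0.2
```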


To clarify @jgosmann’s comment, you can change the parameters for the (global) BG network by modifying the values of the Weights class, like so:

spa.modules.basalganglia.Weights.le = ...

You can put this anywhere in your code, as long as it’s before the creation of the spa.ActionSelection object. Here’s some example code that modifies the global ws (input weight) to the BG network. You can change the ws value to see the effect it has on the BG behaviour:

import matplotlib.pyplot as plt
import nengo
import nengo_spa as spa

D = 16


def input_func(t):
    if t < 3:
        return "A"
    else:
        return "B"


with spa.Network() as model:
    vinput = spa.Transcode(input_func, output_vocab=D)
    vision = spa.State(D)

    spa.modules.basalganglia.Weights.ws = 0.5
    with spa.ActionSelection() as action_sel:
        spa.ifmax(spa.dot(vinput, spa.sym.A), spa.sym.C >> vision)
        spa.ifmax(spa.dot(vinput, spa.sym.B), spa.sym.D >> vision)

    p_vin = nengo.Probe(vinput.output)
    p_vis = nengo.Probe(vision.output, synapse=0.01)


with nengo.Simulator(model) as sim:
    sim.run(5)


vocab = model.vocabs[D]
plt.figure()
plt.subplot(211)
plt.plot(sim.trange(), spa.similarity(sim.data[p_vin], vocab))
plt.legend(vocab.keys())
plt.subplot(212)
plt.plot(sim.trange(), spa.similarity(sim.data[p_vis], vocab))
plt.legend(vocab.keys())
plt.show()

Thank you for your help.

Because I use two levels of spa.ActionSelection in my model, probably with different BG parameter settings: is it sufficient to update the BG parameters for each action selection level by repeating the command
spa.modules.basalganglia.Weights.le = ...
with a different value before the second spa.ActionSelection block?
Kind regards
Bernd

Hmmm. Apparently that works! :smiley:

See my modified code from the example I posted above:

import matplotlib.pyplot as plt
import nengo
import nengo_spa as spa

D = 16


def input_func(t):
    if t < 3:
        return "A"
    else:
        return "B"


with spa.Network() as model:
    vinput = spa.Transcode(input_func, output_vocab=D)
    vision = spa.State(D)
    memory = spa.State(D)

    spa.modules.basalganglia.Weights.ws = 0.5
    with spa.ActionSelection() as action_sel:
        spa.ifmax(spa.dot(vinput, spa.sym.A), spa.sym.C >> vision)
        spa.ifmax(spa.dot(vinput, spa.sym.B), spa.sym.D >> vision)

    spa.modules.basalganglia.Weights.ws = 1
    with spa.ActionSelection() as action_sel2:
        spa.ifmax(spa.dot(vinput, spa.sym.A), spa.sym.D >> memory)
        spa.ifmax(spa.dot(vinput, spa.sym.B), spa.sym.C >> memory)

    p_vin = nengo.Probe(vinput.output)
    p_vis = nengo.Probe(vision.output, synapse=0.01)
    p_mem = nengo.Probe(memory.output, synapse=0.01)


with nengo.Simulator(model) as sim:
    sim.run(5)


vocab = model.vocabs[D]
plt.figure()
plt.subplot(311)
plt.plot(sim.trange(), spa.similarity(sim.data[p_vin], vocab))
plt.legend(vocab.keys())
plt.subplot(312)
plt.plot(sim.trange(), spa.similarity(sim.data[p_vis], vocab))
plt.legend(vocab.keys())
plt.subplot(313)
plt.plot(sim.trange(), spa.similarity(sim.data[p_mem], vocab))
plt.legend(vocab.keys())
plt.show()
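For what it’s worth, this presumably works because the weight values are read from the Weights class at the moment each ActionSelection network is built, so each network captures whatever value the class attribute holds at that time. A pure-Python sketch of that mechanism (simplified stand-in classes, not the real nengo_spa internals):

```python
class Weights:
    ws = 1.0  # class-level default, looked up at build time


class BasalGanglia:
    def __init__(self):
        # The current class attribute is captured when the network is
        # built, so later changes to Weights.ws do not affect this instance.
        self.ws = Weights.ws


Weights.ws = 0.5
bg1 = BasalGanglia()   # built with ws = 0.5
Weights.ws = 1.0
bg2 = BasalGanglia()   # built with ws = 1.0
print(bg1.ws, bg2.ws)  # -> 0.5 1.0
```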