Nengo implementation of Parisien transform, from mixed sign weights to separate excitatory-inhibitory populations

Hi Nengoites,

I have a question about the Parisien transform for converting networks with mixed-sign weights into networks with separate groups of excitatory and inhibitory neurons.
Specifically, my question is about how it is implemented in Nengo.
I’ve been working through these Jupyter notebooks from Terry Stewart:

My understanding is that, at a high level, the transform consists of two steps: (1) add a weight $\Delta w$ to every weight between the two groups of neurons so that all of the weights become greater than or equal to zero, which also introduces a bias current into the receiving population, and then (2) create a set of inhibitory neurons that represent that bias as a function $f^{\text{bias}}(x)$ and use them to remove it from the population that now receives purely excitatory input from the first group.
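To make step (1) concrete for myself, here’s a toy numpy sketch of the idea (my own construction, not the notebook’s code: it just scales an all-positive rank-one matrix until every shifted weight is non-negative, whereas the actual transform builds bias_w differently but ends up satisfying the same condition w + bias_w >= 0):

import numpy as np

# Toy sketch of step (1): shift a mixed-sign weight matrix w (n_post x n_pre)
# so that every entry becomes non-negative, using an all-positive rank-one bias.
rng = np.random.RandomState(0)
w = rng.randn(5, 8) * 1e-4                # stand-in for the implicit NEF weights

post_part = np.ones(w.shape[0])           # stand-in for post-side factors (e.g. encoders/gains)
pre_part = np.ones(w.shape[1])            # stand-in for pre-side factors (e.g. decoders of a bias function)
outer = np.outer(post_part, pre_part)     # all entries > 0

alpha = max(0.0, -(w / outer).min())      # smallest scale that removes all negative entries
bias_w = alpha * outer

assert (w + bias_w).min() >= -1e-12       # all shifted weights are (numerically) non-negative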

The function in the notebook seems to do the job, because I can run

parisien_transform(conn, model, inh_synapse=conn.synapse)

and confirm that the output is unchanged, i.e. the following produces the same plot as running the simulation before applying the Parisien transform:

sim_after = nengo.Simulator(model)
sim_after.run(2)
pylab.plot(sim_after.trange(), sim_after.data[p_stim])
pylab.plot(sim_after.trange(), sim_after.data[p])
pylab.show()
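(For reference, a minimal setup along these lines should be enough to reproduce this. It’s a sketch with placeholder parameters rather than my exact script, and parisien_transform is the function defined in the notebook.)

import numpy as np
import nengo

model = nengo.Network()
with model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # arbitrary input signal
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    conn = nengo.Connection(a, b)                         # the connection to be transformed
    p_stim = nengo.Probe(stim, synapse=0.01)
    p = nengo.Probe(b, synapse=0.01)

parisien_transform(conn, model, inh_synapse=conn.synapse)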

But when I inspect the connections (I added labels),

model.connections

[<Connection at 0x27e5422d3c8 from <Node (unlabeled) at 0x27e5422d160> to <Ensemble (unlabeled) at 0x27e5422d2e8>>,
 <Connection at 0x27e530df390 original>,
 <Connection at 0x27e54239048 bias_w>,
 <Connection at 0x27e542398d0 pre_to_inhib>,
 <Connection at 0x27e5600b2e8 inhib_to_post>]

I find that the original connection still has negative weights:

enc = sim_after.data[model.connections[1].post_obj].encoders
dec = sim_after.data[model.connections[1]].weights
transform = nengo.utils.builder.full_transform(model.connections[1])
w = np.dot(enc, np.dot(transform, dec))
print(w)
[[ 1.12097336e-04 -1.40153074e-04  1.74687565e-06 ...  1.00287070e-04
   2.23849466e-05 -9.05548508e-06]
 [-3.65261986e-05  1.19570063e-04 -7.43745958e-05 ... -1.00219160e-04
   2.54160605e-05 -2.83091066e-05]
...

Naively, I would have implemented the transform by adding the $\Delta w$ to the original weights to make them all positive, but it seems that’s not what’s happening here. (I guess under the Neural Engineering Framework this would require changing the decoders of the presynaptic population and/or the encoders of the postsynaptic population?)

Can someone explain to me what I’m not understanding about the implementation? Should the weights of the original connection all be positive after the transform or not? Am I just not computing them correctly or something?

Any insight would be much appreciated. Thanks for your time.
–David

I’ll take a stab at answering my own question; someone please tell me if there’s something I’m getting wrong.

The weights between the two ensembles do change.
For example, here is what the function in the notebook I linked to prints out after computing the transform:

print("new weights minimum: {}".format(np.min(w + bias_w)))
print("new weight maximum: {}".format(np.max(w + bias_w)))

new weights minimum: -2.710505431213761e-20
new weight maximum: 0.0004044598682324777

But the encoders and decoders in the original connection, from which w was computed, do not change.
Instead, a new connection is made directly between the neurons of the two populations A and B, carrying the bias weights bias_w.
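In Nengo terms, I’d expect that bias connection to be built with something like the following (a sketch, not the notebook’s exact code, assuming a and b are the pre and post ensembles and bias_w is the (b.n_neurons × a.n_neurons) matrix from above):

# Direct neuron-to-neuron connection carrying the bias weights
# (hypothetical snippet; the notebook creates this inside parisien_transform).
nengo.Connection(a.neurons, b.neurons,
                 transform=bias_w,
                 synapse=conn.synapse)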

At run time, the simulator uses the encoders and decoders from both the original connection and the bias connection to compute the current flowing into population B.

So even though the encoders and decoders in the original connection do not explicitly change, and hence the weights computed from them don’t change, the effective weights between the two populations of neurons do change: they are determined both by the original encoder-decoder term, $e_j \cdot d_i$, and by the new encoder-decoder term in the “bias” connection, $e_j^b \cdot d_i^b$.
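As a sanity check (a sketch, reusing the inspection code from my first post and assuming the connection order shown there, i.e. index 1 is the “original” decoded connection and index 2 is the “bias_w” neuron-to-neuron connection), the two implicit weight matrices can be added together, and the sum should be non-negative, matching the notebook’s printout above:

orig_conn = model.connections[1]   # "original" decoded connection
bias_conn = model.connections[2]   # "bias_w" neuron-to-neuron connection

enc = sim_after.data[orig_conn.post_obj].encoders
dec = sim_after.data[orig_conn].weights
w = np.dot(enc, np.dot(nengo.utils.builder.full_transform(orig_conn), dec))

# For a neuron-to-neuron connection the full transform is the weight matrix itself.
bias_w = nengo.utils.builder.full_transform(bias_conn)

print(np.min(w + bias_w))   # ~0 up to floating-point error
print(np.max(w + bias_w))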

I guess the reason this confused me is that I am used to approaches where the weights between neurons are defined explicitly.

So this means I can’t rewrite it as a single connection, because the encoders and decoders are determined by the functions they represent. Is that correct?

I can’t help with answering your question, but I am interested in your work. I am currently looking for a way to implement STDP in a randomly connected ensemble of neurons, and for that I would need to identify the synaptic connections over time. Do you have any suggestions/feedback based on your work with Nengo? I would really appreciate it.

Not quite sure what you’re asking, but did you see this post?
