Hi Nengoites,
I have a question about the Parisien transform for converting networks with mixed-sign weights into networks with separate groups of excitatory and inhibitory neurons.
My question specifically is about how it is implemented in Nengo.
I’ve been working through these Jupyter notebooks from Terry Stewart:
My understanding is that, at a high level, the transform consists of two steps: (1) adding a constant $\Delta w$ to all the weights between the two populations so that every weight is greater than or equal to zero, which introduces an unwanted bias input, and then (2) creating a set of inhibitory neurons that compute that bias as a function $f^{\text{bias}}(\mathbf{x})$ and subtract it from the postsynaptic population, cancelling the input added in step (1).
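If I'm reading the notebook right, the shift in step (1) means each postsynaptic neuron picks up an extra input of $\Delta w \sum_j a_j(\mathbf{x})$, where $a_j$ are the presynaptic activities. Here's a minimal numpy sketch of that step as I picture it; all names and shapes are mine, not the notebook's:

import numpy as np

# toy mixed-sign weight matrix (post x pre); shapes are arbitrary
w = np.random.randn(40, 50) * 1e-4
delta_w = max(0.0, -w.min())   # smallest shift that makes every weight >= 0
w_exc = w + delta_w            # purely excitatory weights
# the shift adds delta_w * sum_j a_j(x) to each postsynaptic neuron's input,
# which is the bias that step (2)'s inhibitory population has to cancel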
The function in the notebook seems to do the job, because I can run
parisien_transform(conn, model, inh_synapse=conn.synapse)
and confirm that the output is unchanged, i.e. the following produces the same plot as running the simulation before applying the Parisien transform to the model:
import nengo
import pylab

# build and run the transformed model, then plot the stimulus and decoded output
sim_after = nengo.Simulator(model)
sim_after.run(2)
pylab.plot(sim_after.trange(), sim_after.data[p_stim])
pylab.plot(sim_after.trange(), sim_after.data[p])
pylab.show()
But when I inspect the connections (I added labels):
model.connections
[<Connection at 0x27e5422d3c8 from <Node (unlabeled) at 0x27e5422d160> to <Ensemble (unlabeled) at 0x27e5422d2e8>>,
<Connection at 0x27e530df390 original>,
<Connection at 0x27e54239048 bias_w>,
<Connection at 0x27e542398d0 pre_to_inhib>,
<Connection at 0x27e5600b2e8 inhib_to_post>]
I find that the original connection still has negative weights:
import numpy as np
import nengo.utils.builder

# reconstruct the full connection-weight matrix for the original connection:
# (postsynaptic encoders) x (transform) x (presynaptic decoders)
enc = sim_after.data[model.connections[1].post_obj].encoders
dec = sim_after.data[model.connections[1]].weights
transform = nengo.utils.builder.full_transform(model.connections[1])
w = np.dot(enc, np.dot(transform, dec))
print(w)
[[ 1.12097336e-04 -1.40153074e-04 1.74687565e-06 ... 1.00287070e-04
2.23849466e-05 -9.05548508e-06]
[-3.65261986e-05 1.19570063e-04 -7.43745958e-05 ... -1.00219160e-04
2.54160605e-05 -2.83091066e-05]
...
Naively, I would have implemented the transform by adding $\Delta w$ to the original weights to make them all nonnegative, but it seems that's not what's happening here. (I guess that under the Neural Engineering Framework this would require changing the decoders of the presynaptic population and the encoders of the postsynaptic population?)
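To make that concrete, here's a toy numpy illustration of why I think the shift can't simply be absorbed into the existing encoder/decoder factorization (all names and shapes are mine):

import numpy as np

E = np.random.randn(30, 1)   # hypothetical postsynaptic encoders (30 neurons, 1D)
D = np.random.randn(1, 40)   # hypothetical presynaptic decoders (1D, 40 neurons)
W = E @ D                    # factored full weight matrix, rank 1

delta_w = max(0.0, -W.min())
W_shifted = W + delta_w      # every entry nonnegative now...
# ...but the shift is a rank-1 update (delta_w times a matrix of ones), so
# W_shifted is generically rank 2 and no longer factors through the same
# one-dimensional decode
print(np.linalg.matrix_rank(W), np.linalg.matrix_rank(W_shifted))  # typically: 1 2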
Can someone explain what I'm not understanding about the implementation? Should the weights of the original connection all be nonnegative after the transform, or not? Am I just not computing them correctly?
Any insight would be much appreciated. Thanks for your time.
–David