# Connecting ensembles using neuron clusters (second question added)

Hi,

For a school project of mine I want to introduce a distribution of synaptic values into a connection between ensembles. ens2's activity should be the same as ens1's activity when no synaptic variance is used (as in the code below).

The first method (neuron to neuron) clusters the neurons of both the pre and post ensembles and then fully connects all pre clusters to all post clusters. The second method (neuron to ensemble) connects each neuron cluster of the pre ensemble to the entire post ensemble.

Option 1 of both methods (full connection, no clustering) gives the result I expect: ens2's activity looks very similar, if not identical, to ens1's activity. However, when I try the clustering, for either method and without any synaptic variance yet, I find that ens2 shows either no or only very minor activity. Does anyone have experience with connecting clusters like this? I can get some activity by scaling up the weights in the clustered connections, but then I find that the larger the number of clusters, the poorer the connection gets.

simplified code:

```python
import nengo
import numpy as np

## neuron to neuron

model = nengo.Network()
with model:
    ens1 = nengo.Ensemble(1000, 24, encoders=enc, seed=1)  # enc: encoder array defined elsewhere
    ens2 = nengo.Ensemble(1000, 24, seed=2)

    # Make a temporary connection, use it to retrieve the weights,
    # then delete the Connection.
    conn = nengo.Connection(
        ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True), seed=3
    )

    with nengo.Simulator(model) as sim:
        weights = sim.data[conn].weights / sim.data[ens2].gain[:, None]

    model.connections.remove(conn)

    # Divide ens1 and ens2 into clusters
    Ncluster = 50  # number of neurons per cluster
    clusters = np.arange(0, 1000, Ncluster)  # result: (0, 50, 100, ..., 900, 950)

    ### OPTION 1: fully connect ens1 to ens2 using the found weights
    nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)

    ### OPTION 2: fully connect all neuron clusters using the found weights
    for i in range(clusters.size):
        begin1 = clusters[i]
        end1 = begin1 + Ncluster
        for j in range(clusters.size):
            begin2 = clusters[j]
            end2 = begin2 + Ncluster
            nengo.Connection(
                ens1.neurons[begin1:end1],
                ens2.neurons[begin2:end2],
                transform=weights[begin2:end2, begin1:end1],
            )

## neuron to ensemble

model = nengo.Network()
with model:
    ens1 = nengo.Ensemble(1000, 24, encoders=enc, seed=1)
    ens2 = nengo.Ensemble(1000, 24, seed=2)

    # Make a temporary connection, use it to retrieve the weights,
    # then delete the Connection.
    conn = nengo.Connection(ens1, ens2, function=lambda x: x, seed=3)

    with nengo.Simulator(model) as sim:
        weights = sim.data[conn].weights

    model.connections.remove(conn)

    # Divide ens1 into clusters
    Ncluster = 50  # number of neurons per cluster
    clusters = np.arange(0, 1000, Ncluster)  # result: (0, 50, 100, ..., 900, 950)

    ### OPTION 1: fully connect ens1 neurons to ens2 using the found weights
    nengo.Connection(ens1.neurons, ens2, transform=weights)

    ### OPTION 2: connect all ens1 neuron clusters to ens2 using the found weights
    for i in range(clusters.size):
        begin = clusters[i]
        end = begin + Ncluster
        nengo.Connection(ens1.neurons[begin:end], ens2, transform=weights[:, begin:end])
```

Hi @ChielWijs. Welcome to the forum!

I think the main difference between the neuron->neuron method and the neuron->ensemble method is whether you're able to connect to clusters in the destination population or not. In principle, I think either should work.

Looking at your code for the neuron->neuron case, it seems all right. I made my own version of it, with only a few minor changes to get it to run. I added an input, and made the ensembles 2-dimensional instead of 24-dimensional so it's easier to see their output. I used different numbers of neurons for ensemble 1 and ensemble 2, just because that helps me debug and make sure that e.g. I don't have a transform matrix accidentally transposed, or something like that. (I've used fewer neurons here, but I also tried with 1000 neurons in each ensemble, and it still works.) I also commented out the "OPTION 1" connection, because you obviously don't want to be doing both options simultaneously.

When I run this code, it works as expected. So the problem must be that something goes wrong when you implement this in your larger model; there doesn't seem to be anything fundamentally wrong with what you're doing.

```python
import matplotlib.pyplot as plt
import nengo
import numpy as np

## neuron to neuron

model = nengo.Network()
with model:
    inp = nengo.Node(lambda t: [np.sin(t), np.cos(t)])

    ens1 = nengo.Ensemble(200, 2, seed=1)
    ens2 = nengo.Ensemble(250, 2, seed=2)
    ens2_p = nengo.Probe(ens2, synapse=0.03)

    nengo.Connection(inp, ens1, synapse=None)

    # Make a temporary connection, use it to retrieve the weights,
    # then delete the Connection.
    conn = nengo.Connection(
        ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True), seed=3
    )

    with nengo.Simulator(model) as sim:
        weights = sim.data[conn].weights / sim.data[ens2].gain[:, None]

    model.connections.remove(conn)

    # Divide ens1 and ens2 into clusters
    Ncluster = 50  # number of neurons per cluster
    clusters1 = np.arange(0, ens1.n_neurons, Ncluster)
    clusters2 = np.arange(0, ens2.n_neurons, Ncluster)

    ### OPTION 1: fully connect ens1 to ens2 using the found weights
    # nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)

    ### OPTION 2: fully connect all neuron clusters using the found weights
    for begin1 in clusters1:
        end1 = begin1 + Ncluster
        for begin2 in clusters2:
            end2 = begin2 + Ncluster
            nengo.Connection(
                ens1.neurons[begin1:end1],
                ens2.neurons[begin2:end2],
                transform=weights[begin2:end2, begin1:end1],
            )

with nengo.Simulator(model) as sim:
    sim.run(3.0)

plt.plot(sim.trange(), sim.data[ens2_p])
plt.show()
```

Output plot:


Hi Eric,

Thank you for your response. The problem must then lie somewhere else. Looking at the values in the GUI, I also observe correct behaviour from the connection. The weird results appear in plots made later on by the program, so I will take a look at that part of the code.

Good day,

Chiel

Hi Eric,

The program does indeed work as it should when using nengo.Simulator to run the model. In the real program, however, the model is run by a Simulator that is a subclass of nengo_ocl.Simulator, and with that simulator the program does not behave the same as with nengo.Simulator.

From trying out some plots, it seems like only the first connection made in the for loop is actually connected. I based this on trying both 4 and 10 clusters while keeping the rest of the parameters the same; the run with 10 clusters (so fewer neurons per cluster) showed lower activity. This activity is measured in an ensemble that receives a connection from ens2, not in ens2 itself. I then wrote out the for loop as separate nengo.Connection calls to see if the loop might be the cause, but it was not, as could be expected.

Would you have any idea why using nengo_ocl.Simulator (for the model as a whole, that is; the simulation used to find the weights still uses the regular nengo.Simulator) causes the model not to work properly? I have read through a lot of the nengo_ocl documentation but could not find anything that might cause this problem. Perhaps nengo_ocl does not handle neuron slices the same way nengo does?

Thank you,

Chiel

Hi Chiel,

What hardware are you running on? If you need support for an NVIDIA GPU, would it be possible for you to use NengoDL rather than NengoOCL? NengoOCL has not been updated in quite a while.

From what I can tell, your problem is a bug in NengoOCL. We do plan to do a new release of NengoOCL at some point in the future, but currently that's still a ways away. I've made an issue here: https://github.com/nengo/nengo-ocl/issues/167
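For what it's worth, the symptom you describe (only the first connection in the loop appearing to take effect, and less activity with more clusters) is consistent with the slice offsets being lost somewhere in the build, so that only the first block ever reaches the target. This NumPy sketch is purely an illustration of that suspected failure mode, not NengoOCL's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
x = rng.standard_normal(n)             # toy pre-ensemble neuron activities
weights = rng.standard_normal((n, n))  # toy full (post, pre) weight matrix

def blocked_product(n_cluster, stop_after_first=False):
    """Sum the cluster-to-cluster block contributions; optionally mimic
    the suspected bug where only the first connection takes effect."""
    out = np.zeros(n)
    for b1 in range(0, n, n_cluster):      # pre clusters
        for b2 in range(0, n, n_cluster):  # post clusters
            out[b2:b2 + n_cluster] += (
                weights[b2:b2 + n_cluster, b1:b1 + n_cluster]
                @ x[b1:b1 + n_cluster]
            )
            if stop_after_first:
                return out  # hypothetical bug: later blocks never applied
    return out

full = weights @ x
assert np.allclose(blocked_product(2), full)  # correct build: blocks sum to the full product
buggy = blocked_product(2, stop_after_first=True)
assert np.allclose(buggy[2:], 0.0)            # only the first post cluster receives input
```

With more clusters the first block gets smaller, so even less of ens2 would be driven, which would match the drop in activity you saw when going from 4 to 10 clusters.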

Hi Eric,

Thank you for your response, I will try NengoDL.