# Scaling up similarity

#4

I’m going to try and explain what you said back to you to make sure I understand.

There are three cortical sections that seem important. They are the Core, Lateral Belt and Parabelt. You want to model the Lateral Belt as a Novelty Detector of data coming from the Core. You’ve defined Novelty as being the similarity between the Core and the Parabelt, which you’ve modeled as the product between two signals.

What behaviour do you want to initiate once you have this similarity measure? Alternatively, is it the ability to calculate the similarity that you are currently stuck on?

Looking at your code, there are a few problems, and a few quality-of-life improvements that could be made. I’ve posted your code below for ease of reference:

```python
import numpy as np
import matplotlib.pyplot as plt
import nengo
from nengo.processes import Piecewise

model = nengo.Network()

with model:
    stim_a = nengo.Node(7.5)
    stim_b = nengo.Node(Piecewise({0: .5, .1: 7.5, .2: .2}))

    # set up the input neurons
    # (these ensemble definitions were lost in the paste and have been
    #  reconstructed; the neuron counts are assumptions)
    a = nengo.Ensemble(100, dimensions=1)
    b = nengo.Ensemble(100, dimensions=1)
    c = nengo.Ensemble(100, dimensions=1)
    d = nengo.Ensemble(100, dimensions=1)
    e = nengo.Ensemble(100, dimensions=1)
    prod = nengo.Ensemble(100, dimensions=1)

    # set up the alternate path
    combined = nengo.Ensemble(200, dimensions=2)
    combined.encoders = nengo.dists.Choice([[1, 1], [-1, 1], [1, -1], [-1, -1]])

    nengo.Connection(stim_a, a)
    nengo.Connection(stim_b, b)
    nengo.Connection(stim_a, c)

    # connect the alternate path
    nengo.Connection(a, combined[0])
    nengo.Connection(b, combined[1])

    # connect the input neurons to the 2D neuron
    # nengo.Connection(c, d[0])
    nengo.Connection(d, e, transform=2.5)

    # define the product function
    def product(x):
        return x[0] * x[1]

    # connect up the product transform
    nengo.Connection(combined, prod, transform=-2.5, function=product)

    threshold = -0.8

    def thresh(x):
        return x[0] - threshold

    # nengo.Connection(prod, d[1])
    nengo.Connection(prod, d, transform=1.5, function=thresh)

    # connect up the addition transform
    nengo.Connection(c, e, transform=-.14)  # , function=additn)

    # set up the probes to gather data
    product_probe = nengo.Probe(prod, synapse=0.01)
    d_probe = nengo.Probe(d, synapse=0.01)
    e_probe = nengo.Probe(e, synapse=0.01)

# set up the simulator run
with nengo.Simulator(model) as sim:
    sim.run(0.5)

# save the probes as txt type data files
np.savetxt("prod_probe.txt", sim.data[product_probe])
np.savetxt("d_probe.txt", sim.data[d_probe])
np.savetxt("e_probe.txt", sim.data[e_probe])

plt.figure()
plt.plot(sim.trange(), sim.data[product_probe], label='product')
plt.plot(sim.trange(), sim.data[d_probe], label='d output')
plt.plot(sim.trange(), sim.data[e_probe], label='e output')
plt.legend()

plt.show()
```

What exactly are the inputs and outputs that you’re expecting from this network and how are they differing from what you’re currently getting?

Even without totally understanding what’s happening in the code, there are a few quirks worth noting. You’re using a custom product ensemble; may I suggest using `nengo.networks.Product` instead? Additionally, you seem to want a similarity threshold for some decision to be made. Your current implementation won’t work as you expect, and I recommend instead using the configuration shown in this example. Finally, your inputs `7.5` and `7.5` greatly exceed the radius of the ensemble they are feeding into, which might be the source of some of your confusion. These inputs are going to saturate the neural ensembles, which will end up representing something around `1.2` rather than `7.5`.
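To make the range and threshold points concrete, here is a plain-Python sketch (not the Nengo API; the function names are mine) of what pre-scaling the inputs and a genuine threshold would do:

```python
def scale_into_radius(x, max_expected, radius=1.0):
    """Pre-scale an input so it stays inside an ensemble's radius.

    A radius-1 ensemble cannot faithfully represent 7.5; it saturates.
    Dividing by the largest expected magnitude keeps the represented
    value in range.
    """
    return x * (radius / max_expected)


def thresholded(x, threshold):
    """Pass x through only when it exceeds the threshold, else 0.

    Subtracting the threshold on a connection (as in the posted code)
    only shifts the value; it never actually gates anything.
    """
    return x if x > threshold else 0.0


print(scale_into_radius(7.5, max_expected=7.5))  # 1.0: now in range
print(thresholded(0.3, threshold=0.5))           # 0.0: gated off
print(thresholded(0.9, threshold=0.5))           # 0.9: passed through
```

In Nengo itself, the usual way to get the second behaviour is to set the ensemble’s intercepts at the threshold, rather than subtracting the threshold on a connection.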

# Bonus tips

One last thing. I’m a bit surprised you’re doing manual plotting. Have you tried using the Nengo GUI for exploring your model?

#5

I must admit that I don’t understand everything about the models I am
trying to use. I just barely managed to get the graphics running on this
machine; the version that I have is without the NEF front-end menu to
the simulator, something about having two different browsers both
trying to interface with it. The default is Evolution on my machine, and
it took me a while to find a copy of Firefox that would load. When I
load the GUI, it automatically tries to load Evolution, and I have to
interfere with that by loading Firefox right afterwards or it gives me a
dummy simulator and won’t compile my code. It is much more reasonable to
do my work with Python, which works every time.

Graeme

#6

Ok, I have had time to review and experiment with your advice. For instance, the problem with nengo.networks.Product() is that it demands a single vector when I want to compare vectors. I did away with the nengo.dists.Choice() line with no noticeable negative effects, which goes a long way towards making the algorithm scalable, and I have already scaled it up to three dimensions and tuned it with smaller transforms, etc.

On the inputs: once I had them in range, the results became much tighter to tune, but it still requires custom tuning each time I scale the system larger. On the outputs: I want a negative value that is thresholdable, for gating purposes, on d, and a fairly representative version of a after gating. I could probably use some help implementing the threshold, if only because I was guessing at how to implement it and this is the best mechanism I could find. None of the core mechanisms allow a threshold variable to be set, I am not yet familiar with all the other functions, and I still cannot load the library.
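The gating behaviour described above, reduced to a plain-Python sketch (the threshold value and names are illustrative):

```python
def gate(signal, control, threshold=-0.8):
    """Pass `signal` through only while `control` is below a negative
    threshold; otherwise output 0 (the gate is closed).
    """
    return signal if control < threshold else 0.0


print(gate(0.6, control=-0.9))  # 0.6: control strongly negative, gate open
print(gate(0.6, control=-0.2))  # 0.0: control not negative enough, gate closed
```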

Actually, I noted an error in your assumption of what I was trying to do: I am not comparing the core and parabelt, I am comparing core values and gating them in the parabelt.

#7

Rats, I made a mistake in the last posting: I want e to be a mirror image of A (with a negative value).

#8

I have to admit that I missed your threshold example. However, when I tried it, I found it wasn’t tuned for negative logic: it assumes that the high value is the one I want to threshold for, not the negative value. In a tonic inhibition mechanism you want to threshold for the negative value that is largest, not the values that are above the threshold.
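In plain terms, thresholding for the most negative value is just a sign flip in front of a standard "fire above threshold" rule; a minimal sketch (the names are illustrative):

```python
def negative_threshold(x, threshold=0.8):
    """Fire on the *most negative* inputs by flipping the sign first.

    A standard positive-logic threshold applied to -x fires exactly
    when x is below -threshold, which is the tonic-inhibition case:
    release only on a strongly negative value.
    """
    flipped = -x
    return flipped if flipped > threshold else 0.0


print(negative_threshold(-0.9))  # 0.9: strongly negative input fires
print(negative_threshold(-0.5))  # 0.0: not negative enough
print(negative_threshold(0.9))   # 0.0: positive inputs never fire
```

In Nengo terms, the sign flip is a `transform=-1` on the incoming connection, after which the positive-logic threshold example applies unchanged.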

Graeme

#9

FYI, you can edit your posts in this forum by clicking the three dots beside “reply” and then clicking the pencil icon.

#10

I’m still not clear on what you’re trying to achieve. Would it be possible to draw a block diagram showing the various inputs and outputs? Alternatively, would you want to hop onto a video call to discuss this more quickly?

#11

Ok, I will try to send you a graphic I made up that describes the basic network. I’d go on a video call if I could, but my system is very buggy and lags excessively. What I want to do first is expand the input range to at least 30 inputs and still manage to compare the vectors for each input using something like a dot product that will detect similarity. This will then go through an additive inversion to change the similarity detector into a novelty detector. Then I want to use the output from that calculation to gate a copy of the A input, so that detection of novelty releases a copy of the A input. It’s a simple circuit, just complicated by scaling up to 30 inputs, and by a problem I am now having trying to get the gating mechanism to propagate. I am embarrassed that it is so simple.
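That pipeline (similarity by dot product, additive inversion into novelty, novelty gating a copy of A) can be sketched in plain NumPy; the 0.5 gate threshold here is an assumption:

```python
import numpy as np


def novelty_gated_copy(a, b):
    """Sketch of the circuit described above, for unit-norm inputs.

    similarity: dot product of the two input vectors
    novelty:    additive inversion, 1 - similarity
    gating:     a copy of `a` is released only when novelty is high
    """
    similarity = float(np.dot(a, b))
    novelty = 1.0 - similarity
    gate_open = novelty > 0.5  # assumed threshold
    return a.copy() if gate_open else np.zeros_like(a)
```

With identical inputs the novelty is 0 and nothing passes; with orthogonal inputs the novelty is 1 and a copy of A is released.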

#12

Quoting Seanny123 (Reviewer), May 14:

> I’m still not clear on what you’re trying to achieve. Would it be possible to draw a block diagram showing the various inputs and outputs? Alternatively, would you want to hop onto a video call to discuss this more quickly?

(block diagram image attachment)
#13

Thanks for drawing the diagram. It really helped me understand what you’re working towards!

What do you mean by `inputs`? From what I can tell, there are two inputs in your current network. Are input A and input B gating each other in the full network? If you added an input C, would it be added to the combination of A and B? Would it just be gated by the product of A and B?

#14

Actually no, it adds its own product, so I am comparing multiple entries at once. I am uploading a new version of dot_prod.py with 3 inputs to illustrate. (I have commented out the hand-plotting lines so that it will load in the nengo_gui version of the program.) I am not sure I have the best version of this, but it outputs a mirror image of A once it is in the right sector of the graph. I will also upload the latest version of figure1, which tracks the outputs of the neurons. The output of the whole circuit is in E.

(attachment: dot_product scaled up to 3 inputs)

#15

(image attachment)

#16

(image attachment)

#17

As you will notice, the drawing is of the old, non-scaled version of the program. Scaling it up to three inputs still works, but I have yet to make six inputs work. I am also trying to scale it up to six outputs, but the simulation takes time between updates.

#18

I think I have been simulating similarity wrongly. I was confused because I knew that there were only two laminae in the belt cortex that I absolutely needed to implement. Nengo doesn’t separate out laminae, so I assumed that I needed to implement it in two ensembles, but now I think I need to implement it in three different types of ensembles. The spare type of ensemble is needed because there are two functions of the combined ensemble that don’t fit together in a single ensemble, as far as I know. It’s a limitation of the model; natural neurons can probably mix functions more easily. Instead of trying to make it all into a single similarity detector, I have to string multiple similarity detectors of only 2 dimensions together using an additive function.
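Stringing 2-dimensional detectors together with an additive function amounts to computing an N-dimensional dot product as a sum of pairwise products; a minimal sketch (names are illustrative):

```python
def chained_similarity(x, y):
    """An N-D dot product built from 2-D pieces.

    Each (xi, yi) pair stands in for one 2-D product ensemble; the
    final sum stands in for the additive ensemble that strings the
    detectors together.
    """
    return sum(float(xi * yi) for xi, yi in zip(x, y))


print(chained_similarity([1, 2, 3], [4, 5, 6]))  # 32.0
```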

#19

Ok, I heard about cosine similarity and tried it, because the dot product was too quick to get out of range. It seems better, and the transforms do not need to be as large. I can’t do it now, but I will update the three-input version in a bit.
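For reference, a NumPy sketch of the difference: cosine similarity is the dot product of the normalised vectors, so it is bounded to [-1, 1] regardless of input magnitude:

```python
import numpy as np


def cosine_similarity(x, y):
    """Dot product of the normalised vectors: bounded to [-1, 1],
    so it stays inside a radius-1 ensemble even when the raw dot
    product would be far out of range.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))


print(cosine_similarity([7.5, 0.0], [0.5, 0.0]))  # 1.0 even for large inputs
print(float(np.dot([7.5, 0.0], [0.5, 0.0])))      # 3.75, already out of range
```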

#20

dot_3.py (3.4 KB)

#21

Although I’m still not totally sure what you’re going for here, and still think a video call would be a good idea, looking at your code I would like to note that this is how you make a cosine similarity network in Nengo:

```python
prod = nengo.networks.Product(n_neurons=100, dimensions=dimensions)
dot_output = nengo.Node(size_in=1)
nengo.Connection(self.product.output, self.output, transform=np.ones((1, dimensions)))
```

#22

It’s anything but self-explanatory. Where did self.product.output and self.output come from? And what does np.ones((1, dimensions)) do? Plus, what the heck are you doing dimensioning a connection? Obviously I haven’t read enough of the documentation; can you point out which parts of the documentation cover this stuff?

Is this all supposed to work in a single def statement?

Colour me Confused

#23

I thought I would take a moment and explain the theory behind what I am trying to do.

Essentially it all has to do with information entropy

As we learn, we integrate information; as we integrate information, it becomes more entropic.

The Maximum Entropy Principle states that the most entropy is in the best-learned data. In other words, maximum entropy exists where information integration is most complete.

mEPLA, my theory, takes this into account and turns it around to find new opportunities for learning. Essentially, what it says is that the minimum entropy is found in areas where information integration is less complete. By extension, this means that by allocating learning resources to areas of low entropy, we can find opportunities to learn and better integrate information.

Similarity is an approximation of entropy: the more similar a thing is, the more integrated its information is, and the less learning that can be achieved.

What I think I am doing is proving that similarity detection is a cheap way to detect learning opportunities. By inverting the logic, I am converting what I think is a natural similarity detector into a novelty detector, which can detect learning opportunities, and by applying learning resources to them, increase the information integration of the whole brain. I am just doing it almost as soon as the information gets to the cortex.

Theoretically novelty is an indication of a learning opportunity.

But to monitor entropy I need to operate in bulk, comparing the entropy of each input with all the others around it. I can then relatively easily gate the lowest-entropy locations, so that the lowest entropy triggers the gating of the outputs. This is probably an early example of salience.
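The bulk comparison described here (gate through the input least similar to its neighbours) can be sketched in NumPy; the function name and the use of cosine similarity are illustrative:

```python
import numpy as np


def most_novel_input(inputs):
    """Pick the input least similar, on average, to all the others.

    Rows are normalised so the pairwise dot products are cosine
    similarities; the row with the lowest average similarity is the
    least-integrated (most novel) channel, the one to gate through.
    """
    X = np.asarray(inputs, float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T                 # pairwise similarity matrix
    np.fill_diagonal(sims, 0.0)    # ignore self-similarity
    return int(np.argmin(sims.mean(axis=1)))


# two near-identical channels and one outlier: the outlier wins the gate
print(most_novel_input([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]]))  # 2
```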