Scaling up similarity


I must admit that I don’t understand everything about the models I am
trying to use. I just barely managed to get the graphics running on this
machine; the version I have is without the NEF front-end menu to load
the GUI. I can probably rig it, but it is flaky at loading the simulator
for me, apparently because two different browsers both try to interface
with it. The default browser on my machine is Evolution, and it took me
a while to find a copy of Firefox that would load. When I load the GUI,
it automatically tries to open Evolution, and I have to interfere by
loading Firefox right afterwards or it gives me a dummy simulator and
won’t compile my code. It is much more reasonable to do my work in plain
Python, which works every time.



Ok, I have had time to review and experiment with your advice. For instance, the problem with nengo.networks.Product() is that it demands a single vector when I want to compare two vectors. I did away with the nengo.dists.Choice() line with no noticeable negative effects, which goes a long way towards making the algorithm scalable; I have already scaled it up to three dimensions and tuned it with smaller transforms, etc.

On the inputs: once I had them in range, the results became much tighter and easier to tune, but the system still requires custom tuning each time I scale it larger. On the outputs: I want a negative value that can be thresholded, for gating purposes, on d, and a fairly representative version of a after gating. I could probably use some help implementing the threshold, if only because I was guessing at how to implement it and this is the best mechanism I could find. None of the core mechanisms allow a threshold variable to be set, I am not yet familiar with all the other functions, and I still cannot load the library.

Actually, I noted an error in your assumption about what I was trying to do: I am not comparing the core and the parabelt, I am comparing core values and gating them in the parabelt.


Rats, I made a mistake in the last posting: I want e to be a mirror image of A (with a negative value).


I have to admit that I missed your threshold example. However, when I tried it, I found it wasn’t tuned for negative logic: it assumes that the high value is the one I want to threshold for, not the negative value. In a tonic inhibition mechanism you want to threshold for the negative value that is largest in magnitude, not for the values that are above the threshold.
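To make the negative-logic idea concrete, here is a plain NumPy sketch of the threshold I am after (the -0.3 cutoff and the function name are just placeholders of mine, not anything from the Nengo API):

```python
import numpy as np

def negative_threshold(x, thresh=-0.3):
    """Negative-logic gate: pass x through only when it is at or below
    a negative threshold; output 0 otherwise. This is the opposite of
    the usual threshold, which passes values *above* a positive cutoff."""
    x = np.asarray(x, dtype=float)
    return np.where(x <= thresh, x, 0.0)

print(negative_threshold([-0.9, -0.1, 0.5]))  # → [-0.9  0.   0. ]
```

In Nengo terms this would be the function computed on the connection out of the thresholding ensemble, just mirrored to fire on sufficiently negative values.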



FYI, you can edit your posts in this forum by clicking the three dots beside “reply” and then clicking the pencil icon.


I’m still not clear on what you’re trying to achieve. Would it be possible to draw a block diagram showing the various inputs and outputs? Alternatively, would you want to hop onto a video call to discuss this more quickly?


Ok, I will try to send you a graphic I made up that describes the basic network. I’d hop on a video call if I could, but my system is very buggy and lags excessively. What I want to do first is expand the input range to at least 30 inputs and still compare the vectors for each input, using something like a dot product to detect similarity. That result then goes through an additive inversion to change the similarity detector into a novelty detector. I then want to use the output from that calculation to gate a copy of the A input, so that detection of novelty releases a copy of the A input. It’s a simple circuit, complicated only by scaling up to 30 inputs and by a problem I am now having getting the gating mechanism to propagate. I am embarrassed that it is so simple.
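As a sanity check on the math for a single pair of inputs (the function name and the 0.5 threshold are my own illustration, not the model's values), the pipeline can be sketched in plain NumPy:

```python
import numpy as np

def novelty_gate(a, b, thresh=0.5):
    """Dot-product similarity -> additive inversion -> gate.

    Returns a copy of `a` when the inputs are dissimilar (novel),
    and a zero vector when they are similar."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    similarity = np.dot(a, b)       # high when a and b point the same way
    novelty = 1.0 - similarity      # additive inversion of similarity
    return a if novelty > thresh else np.zeros_like(a)

a = np.array([1.0, 0.0])
print(novelty_gate(a, np.array([1.0, 0.0])))  # similar input: gate stays closed
print(novelty_gate(a, np.array([0.0, 1.0])))  # novel input: copy of a released
```

The scaling problem is then doing this for 30 inputs at once without the combined similarity signal leaving the representational range of the ensembles.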


Seanny123 Reviewer
May 14



Thanks for drawing the diagram. It really helped me understand what you’re working towards!

What do you mean by “inputs”? From what I can tell, there are two inputs in your current network. Are input A and input B gating each other in the full network? If you added an input C, would it be added to the combination of A and B? Or would it just be gated by the product of A and B?


Actually no, each input adds its own product, so I am comparing multiple entries at once. I am uploading a new version with 3 inputs to illustrate (I have commented out the hand-plotting lines so that it will load in the nengo_gui version of the program). I am not sure I have the best version of this, but it outputs a mirror image of A once it is in the right sector of the graph. I will also upload the latest version of figure 1, which tracks the outputs of the neurons; the output of the whole circuit is in E.

dot_product scaled up to 3 inputs #15 (2.5 KB)


The diagram appears to have disappeared. Would you mind re-uploading it?


As you will notice, the drawing is of the old, non-scaled version of the program. Scaling it up to three inputs still works, but I have yet to make six inputs work. I am also trying to scale it up to six outputs, but the simulation takes time between updates.


I think I have been simulating similarity wrongly. I was confused because I knew that there were only two laminae in the belt cortex that I absolutely needed to implement. Nengo doesn’t separate out laminae, so I assumed that I needed to implement them as two ensembles, but now I think I need to implement three different types of ensembles. The spare type of ensemble is needed because there are two functions of the combined ensemble that don’t fit together in a single ensemble, as far as I know. It’s a limitation of the model; natural neurons can probably mix functions more easily. Instead of trying to make it all into a single similarity detector, I have to string multiple similarity detectors of only 2 dimensions together using an additive function.
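In plain NumPy terms (my own sketch of the idea, not the Nengo implementation), stringing 2-dimensional similarity detectors together with an additive function amounts to something like:

```python
import numpy as np

def summed_pairwise_similarity(pairs):
    """Each small detector handles one 2-D pair (a_i, b_i); an additive
    function then combines the individual dot products into one score,
    instead of one big detector computing everything at once."""
    return sum(np.dot(a, b) for a, b in pairs)

pairs = [
    (np.array([1.0, 0.0]), np.array([1.0, 0.0])),  # identical pair
    (np.array([0.0, 1.0]), np.array([1.0, 0.0])),  # orthogonal pair
]
print(summed_pairwise_similarity(pairs))  # → 1.0
```

In Nengo each term would be its own small Product-style subnetwork, with the sum done by converging connections onto one node or ensemble.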


Ok, I heard about cosine similarity and tried it, because the dot product was too quick to get out of range. It seems better, and the transforms do not need to be as large. I can’t do it now, but I will update the three-input version in a bit.

#20 (3.4 KB)
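For reference, a plain NumPy sketch of why it behaves better (this is the textbook definition, not my Nengo code): cosine similarity normalizes the dot product by the vector lengths, so it is always bounded to [-1, 1] and cannot run out of range the way the raw dot product does.

```python
import numpy as np

def cosine_similarity(a, b):
    """Dot product normalized by the vector lengths; bounded to [-1, 1],
    so it stays inside the default representational range of an ensemble."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([3.0, 4.0])
print(np.dot(a, a))             # raw dot product: 25.0, far out of range
print(cosine_similarity(a, a))  # → 1.0
```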


Although I’m still not totally sure what you’re going for here and still think a video call would be a good idea, looking at your code, I would like to note this is how you make a cosine similarity network in Nengo:

prod = nengo.networks.Product(n_neurons=200, dimensions=dimensions)  # element-wise product of the two vectors
dot_output = nengo.Node(size_in=1)
nengo.Connection(prod.output, dot_output, transform=np.ones((1, dimensions)))  # sum the element-wise products into one scalar


It’s anything but self-explanatory. Where did prod.output and dot_output come from? And what does np.ones((1, dimensions)) do? Plus, what the heck are you doing dimensioning a connection? Obviously I haven’t read enough of the documentation; can you point out which parts of the documentation cover this stuff?

Is this all supposed to work in a single def statement?

Colour me Confused


I thought I would take a moment and explain the theory behind what I am trying to do.

Essentially, it all has to do with information entropy.

As we learn, we integrate information; as we integrate information, it becomes more entropic.

The Maximum Entropy Principle states that the most entropy is in the best-learned data. In other words, maximum entropy exists where information integration is most complete.

mEPLA, my theory, takes this into account and turns it around to find new opportunities for learning. Essentially, it says that the minimum entropy is found in areas where information integration is less complete. By extension, this means that by allocating learning resources to areas of low entropy we can find opportunities to learn and better integrate information.

Similarity is an approximation of entropy: the more similar a thing is, the more integrated its information is, and the less learning can be achieved.

What I think I am doing is proving that similarity detection is a cheap way to detect learning opportunities. By inverting the logic, I am converting what I think is a natural similarity detector into a novelty detector, which can detect learning opportunities and, by applying learning resources to them, increase the information integration of the whole brain. I am just doing it almost as soon as the information gets to the cortex.

Theoretically novelty is an indication of a learning opportunity.

But to monitor entropy I need to operate in bulk, comparing the entropy of each input with all the others around it. I can then relatively easily gate the lowest-entropy locations, so that the lowest entropy triggers the gating of the outputs. This is probably an early example of salience.
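As a rough NumPy sketch of that bulk comparison (the example inputs and the winner-take-all choice are my own illustration, with summed similarity standing in for entropy):

```python
import numpy as np

def most_novel_input(inputs):
    """Compare each input vector with all the others; the one with the
    lowest summed similarity (a stand-in for lowest entropy) wins the
    gate, which is an early form of salience."""
    X = np.asarray(inputs, dtype=float)
    sims = X @ X.T                  # pairwise dot products
    np.fill_diagonal(sims, 0.0)     # ignore each input's self-similarity
    scores = sims.sum(axis=1)       # bulk similarity per input
    return int(np.argmin(scores))   # index of the least similar input

inputs = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0]]
print(most_novel_input(inputs))  # → 2 (the odd one out)
```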


OK, I was disappointed with cosine similarity; it doesn’t do what I want for bulk similarity detection. But I stumbled across a possible way of getting around the range problem. In essence, as the matrix gets more complex, the results get out of range: the more complex the matrix, the further the result gets out of the expression range of the neurons, probably because it depends on a product. What I need is a way of taking the bulk similarity of a specific row in the matrix. The solution might be taking a partial derivative of the cosine similarity of the whole matrix; apparently it can be tuned to pick out the relative similarity of any particular row in the matrix.
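I am not sure this is the right formulation, but one NumPy sketch of "bulk similarity of a specific row" that stays in range (my own construction, not the partial-derivative approach) is the mean cosine similarity of that row against every other row:

```python
import numpy as np

def row_bulk_similarity(M, row):
    """Mean cosine similarity of one row against all the others.
    Each term is bounded to [-1, 1], so the mean stays in range no
    matter how large the matrix grows."""
    M = np.asarray(M, dtype=float)
    norms = np.linalg.norm(M, axis=1)
    unit = M / norms[:, None]        # normalize every row to unit length
    sims = unit @ unit[row]          # cosine of `row` with each row
    others = np.delete(sims, row)    # drop the self-comparison
    return others.mean()

M = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
print(row_bulk_similarity(M, 2))  # → 0.0 (row 2 is orthogonal to the rest)
```

Because the score is an average rather than a sum of products, it should not blow up as more inputs are added, which is the range problem described above.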