Just starting with NEF, some basic questions


#1

Hi,
I’m a home hobbyist starting out on Nengo. I have a basic question on how it works. Here is the problem:

Usually a linear recombination of signals is a weighted sum of signals.
Biological neurons (at least in early theories) were supposed to have every spike of the same amplitude, so only the number of spikes per second could vary.

So let’s say I have 2 neurons firing at the same frequency and I do a weighted combination where the first weight is 1, and the second is 1/2. Suppose this recombination is happening at a dendrite.
I would think that the first neuron would be contributing twice as much as the second.
Now let’s suppose the first neuron is firing twice as fast as the second neuron, but both have equal weights. Again I would think the first neuron contributes twice as much as the second.
Is that true? Do time-constants and filtering affect this?

So if I want to recreate an input signal (that was feeding into these 2 neurons), then a weighted combination of their outputs will create a current (in biology this would be at a dendrite that the two axons synapse onto) that imitates the input current over time?

And what happens if you have more than one dimension so you are trying to recreate both the amplitude and direction of a vector over time?
Thanks!


#2

What resource are you using to learn the NEF? Have you seen the course notes? I don’t want to send you away, but knowing where you’re coming from might help me understand your question.


#3

I started out by buying and starting to read “How to Build a Brain” by Chris Eliasmith, and I have an article, written a year or two afterwards, that I haven’t looked at yet. I have not taken any course, though thanks for showing me the link to the course notes.
I have a biochemistry degree, so I know something about how neurons work, and I used to code neural nets for a little company.
My question is based on my knowledge of “backpropagation”, where neural outputs are linearly combined as inputs to the next layer.
I’m trying to relate that to what I’m reading.
If you re-read the question, you’ll see I’m just asking for basics to speed me up in understanding Nengo.


#4

Thanks for letting me know your background! Although Nengo can be used with Deep Learning, the way Nengo creates neural nets isn’t easily related to back-prop, but I think you already know this.

So let’s say I have 2 neurons firing at the same frequency and I do a weighted combination where the first weight is 1, and the second is 1/2.

I’m guessing you mean decoder weights.

Suppose this recombination is happening at a dendrite.

So before an encoder.

Now let’s suppose the first neuron is firing twice as fast as the second neuron, but both have equal weights. Again I would think the first neuron contributes twice as much as the second.
Is that true?

neuron A: rate 2 × weight 0.5 = 1
neuron B: rate 1 × weight 0.5 = 0.5

If that’s what you meant, then it is true.
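A quick numeric sketch of that bookkeeping (the rates in spikes/s and the equal weight of 0.5 are purely illustrative, not any particular Nengo object):

```python
# A steadily firing neuron's contribution is roughly (firing rate) x (weight).
rate_a, rate_b = 200.0, 100.0   # spikes/s; neuron A fires twice as fast
w_a, w_b = 0.5, 0.5             # equal weights on both neurons

contrib_a = rate_a * w_a        # contribution of neuron A
contrib_b = rate_b * w_b        # contribution of neuron B

# Neuron A contributes twice as much to the summed input.
assert contrib_a == 2 * contrib_b
```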

Do time-constants and filtering affect this?

Yes, but usually you use the same time constant for all neurons in an ensemble, unless you’re trying to get some fancy dynamics. Time constants filter spikes into a continuous signal.
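A minimal sketch of that filtering, assuming a discrete-time exponential (lowpass) synapse; the 100 Hz rate and 50 ms time constant here are illustrative, not Nengo defaults:

```python
import numpy as np

dt, tau = 0.001, 0.05           # 1 ms steps, 50 ms synaptic time constant
t = np.arange(0, 0.5, dt)

# A 100 Hz spike train: one unit-area spike (height 1/dt) every 10 ms.
spikes = np.zeros_like(t)
spikes[::10] = 1.0 / dt

# Leaky integration (zero-order-hold discretization of a first-order lowpass).
decay = np.exp(-dt / tau)
y = np.zeros_like(t)
for k in range(1, len(t)):
    y[k] = y[k - 1] * decay + spikes[k] * (1 - decay)

# After several tau, the filtered signal hovers near the underlying rate (100).
assert 80 < y[-1] < 120
```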

So if I want to recreate an input signal (that was feeding into these 2 neurons), then a weighted combination of their outputs will create a current (in biology this would be at a dendrite that the two axons synapse onto) that imitates the input current over time?

Right. A linear combination weighted by the decoders can reconstruct the input that the neuron population is seeing and feed it into the next population.
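A rough sketch of that idea with hypothetical rectified-linear rate neurons in plain NumPy (not the Nengo API): solve a least-squares problem for the decoders so that the weighted sum of activities reconstructs the represented value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 200
gains = rng.uniform(10, 30, n_neurons)          # illustrative gain range
biases = rng.uniform(-20, 20, n_neurons)        # illustrative bias range
encoders = rng.choice([-1.0, 1.0], n_neurons)   # "on" or "off" neurons

x = np.linspace(-1, 1, n_samples)               # values to represent
# Rectified-linear tuning curves: rate = max(0, gain * encoder * x + bias).
A = np.maximum(0, gains * (encoders[:, None] * x).T + biases)

# Decoders: least-squares solve so that A @ d is close to x.
d, *_ = np.linalg.lstsq(A, x, rcond=None)
x_hat = A @ d

# The weighted sum of neuron activities reconstructs the input signal.
assert np.max(np.abs(x_hat - x)) < 0.1
```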

What happens if you have more than one dimension so you are trying to recreate both the amplitude and direction of a vector over time?

You assume two-dimensionally tuned neurons and proceed as in the one-dimensional case. This is covered at the end of the first lecture notes.
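The same kind of sketch extends to two dimensions by making the encoders unit vectors, so each neuron’s input current is the dot product of its encoder with the represented vector (again hypothetical rate neurons in NumPy, not Nengo code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300                                          # illustrative neuron count
theta = rng.uniform(0, 2 * np.pi, n)
E = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (n, 2) unit encoders
gains = rng.uniform(10, 30, n)
biases = rng.uniform(-10, 10, n)

# Sample 2D points inside the unit disk.
phi = rng.uniform(0, 2 * np.pi, 400)
r = np.sqrt(rng.uniform(0, 1, 400))
X = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)  # (400, 2)

# Rectified rates: input current is the encoder dot product, scaled and biased.
A = np.maximum(0, X @ E.T * gains + biases)      # (400, n)
D, *_ = np.linalg.lstsq(A, X, rcond=None)        # (n, 2) decoders
X_hat = A @ D

# One set of 2D decoders recovers both components of the vector.
assert np.sqrt(np.mean((X_hat - X) ** 2)) < 0.05
```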

Does that answer most of your questions?


#5

Yes, thank you.


#6

Just to help clarify, this is true, and is indeed a consequence of the filtering over time. When those two spikes are accumulated by leaky integration (a lowpass filter acting as the model for the postsynaptic current) they will sum to twice the value of a single spike with the same weight. This is assuming that the two spikes occur nearby in time relative to the time course of the synaptic filter (in other words, that the firing rate is high relative to the [reciprocal of the] time-constant $\tau^{-1}$ used to model the exponential decay of the leak).
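A small numeric check of that superposition, using an exponential synapse model (the spike times and time constant are illustrative):

```python
import numpy as np

# Exponential synapse h(t) = (1/tau) * exp(-t/tau): each unit-area spike bumps
# the postsynaptic current by 1/tau, which then leaks away with time constant tau.
dt, tau = 0.001, 0.1

def filtered(spike_steps, n_steps=300):
    y, out = 0.0, []
    for k in range(n_steps):
        y *= np.exp(-dt / tau)        # leak
        if k in spike_steps:
            y += 1.0 / tau            # spike arrives
        out.append(y)
    return np.array(out)

one = filtered({10})                  # a single spike at t = 10 ms
two = filtered({10, 12})              # two spikes 2 ms apart (<< tau = 100 ms)

# Because the filter is linear, two nearby spikes sum to ~2x one spike.
ratio = two[50] / one[50]
assert 1.9 < ratio < 2.1
```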

The interpretation of decoders as weights only holds roughly in the 1D case (after accounting for a possible change in sign for “on/off” neurons, a gain, and a bias current). In higher dimensions, the actual weight is the dot product of a higher-dimensional decoder with a higher-dimensional encoder. This creates a conceptual distinction between the full weight matrix and its factored form, which is used to understand the relevant computations. The course notes, and some of the Nengo notebook examples, are a good reference for understanding this in more detail.
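A NumPy sketch of that factorization (all sizes and distributions here are illustrative): the full weight matrix is the gains times the product of the post-population encoders and the pre-population decoders, so applying it to activities is the same as decoding and then re-encoding.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pre, n_post, dims = 40, 30, 2

D = rng.normal(size=(n_pre, dims))               # decoders of the pre population
E = rng.normal(size=(n_post, dims))
E /= np.linalg.norm(E, axis=1, keepdims=True)    # unit encoders of the post pop
gains = rng.uniform(1, 3, n_post)

# Full connection weights: W[j, i] = gain_j * <e_j, d_i> for pre i, post j.
W = gains[:, None] * (E @ D.T)                   # (n_post, n_pre)

# Applying W to pre activities equals decode-then-encode with the factors.
a_pre = rng.uniform(0, 100, n_pre)               # pre-synaptic firing rates
x_hat = D.T @ a_pre                              # decoded dims-vector
currents = gains * (E @ x_hat)                   # encoded input currents
assert np.allclose(W @ a_pre, currents)
```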

