[Nengo]: Why does the intercept change when radius is changed?

So for the following code where I have radius=1:

import matplotlib.pyplot as plt
import numpy as np

import nengo
from nengo.utils.ensemble import tuning_curves

model = nengo.Network()
with model:
    ens_1d = nengo.Ensemble(2, dimensions=1, radius=1, intercepts=np.array([0.5, -0.5]),
                            encoders=[[1], [-1]], max_rates=nengo.dists.Uniform(100, 150))
with nengo.Simulator(model) as sim:
    eval_points, activities = tuning_curves(ens_1d, sim)

plt.figure()
plt.plot(eval_points, activities)
plt.ylabel("Firing rate (Hz)")
plt.xlabel("Input scalar, x")


following is the Tuning Curve output:

whereas for the following code with everything same, except for radius=10,

model = nengo.Network()
with model:
    ens_1d = nengo.Ensemble(2, dimensions=1, radius=10, intercepts=np.array([0.5, -0.5]),
                            encoders=[[1], [-1]], max_rates=nengo.dists.Uniform(100, 150))
with nengo.Simulator(model) as sim:
    eval_points, activities = tuning_curves(ens_1d, sim)

plt.figure()
plt.plot(eval_points, activities)
plt.ylabel("Firing rate (Hz)")
plt.xlabel("Input scalar, x")


following is the Tuning Curve output:

As you can see, the intercept is fixed at 0.5 in both code snippets, but the intercept in the second plot is at 5.0. Why is that so? Do the intercepts scale with the radius value? I also see that the negative intercept, i.e., -0.5, is not honoured. Please let me know. Thanks!

nengo.Ensembles that you connect to directly (rather than to the ensemble’s neurons) run in “NEF mode”. Conceptually, the computation performed by an NEF ensemble can be broken down into three phases: encoding, the neuron non-linearity, and decoding.

The encoding phase converts the post-synaptic vector value (which is specified by the nengo.Connection, a separate object) into the input current for the neuron. For nengo.Ensembles, the encoders and normalize_encoders initialization parameters are used in this phase. The dimensions parameter is also indirectly used here, since it determines the dimensionality of the encoders.

The neuron non-linearity phase takes the input current and puts it through the neuron non-linearity to figure out if a spike should occur or not. For nengo.Ensembles, the intercepts, max_rates, gain, bias and neuron_type initialization parameters are used to determine the shape and response of the neuron non-linearity used for this phase.

The decoding phase converts the spikes (or firing rates for rate-mode neurons) back into vector space. For nengo.Ensembles, the eval_points initialization parameter is used in this phase to compute the decoders for this ensemble.

Parameters involved in the encoding and neuron non-linearity phases of the computation are assumed to be specified using a unit vector basis. This means all values are assumed to lie within a range of -1 to 1. Parameters in the decoding phase are assumed to be specified using the radius basis, i.e., within the range of -radius to radius.

Now let’s understand what the radius parameter does, and why it is needed. Since the encoding and neuron phases of the ensemble computation assume a unit vector basis, an issue arises when the user wants the ensemble to represent values that lie outside the unit range. If the user simply feeds the raw input vector into the ensemble, the neurons will saturate, and the representation will not be very accurate. So, what can the user do? The obvious solution is to scale the input vector by some fixed value before feeding it to the ensemble. The scaling value has to be chosen such that for all expected inputs, the value being passed to the neurons (i.e., post scaling) lies within this -1 to 1 range. But if the user applies this scaling on the input, they also need to apply the inverse scaling on the output for there to be no effective change to the vector values being represented by the ensemble. Requiring the user to do both the input and output scaling is tedious and non-intuitive, so we introduced the radius parameter as a shortcut that handles this scaling for you.
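The input/output scaling argument above can be sketched in plain NumPy. Note this is a toy stand-in for an ensemble, not the Nengo API; it just assumes (as described) that values are only represented accurately within the unit range.

```python
import numpy as np

# Toy stand-in for an ensemble's representation: accurate inside
# [-1, 1], saturated outside it. Not the Nengo API.
def represent(x):
    return np.clip(x, -1.0, 1.0)

radius = 10.0
x = 7.0  # an input outside the unit range

# Feeding the raw input in saturates the representation:
saturated = represent(x)  # clipped to 1.0, not 7.0

# The radius trick: scale down on the way in, scale back up on the way out.
recovered = radius * represent(x / radius)  # ~7.0
```

This is exactly the pair of scalings the radius parameter performs for you automatically.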

So, how does this affect your tuning curve? If you specify the intercepts for your neurons to be at 0.5, Nengo will set the intercepts for your neurons to be at 0.5, using a unit basis. If we set the radius to be a non-unit value, the input scaling done by the radius changes the effective value of the intercept. As an example, if the radius was set to 10, all inputs to the ensemble would be scaled by 1/10 before being processed by the neuron non-linearity. Thus, for the neuron’s input current to be 0.5 (i.e., where the intercept is), the vector input would have to be 5 (i.e., 0.5 * 10), which is what you observe in your graph.
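The intercept scaling described above can be written out as a small sketch (this mirrors the scaling logic, not Nengo’s internals):

```python
# With a non-unit radius, the input is divided by the radius before it
# reaches the neuron non-linearity, so an intercept specified at 0.5 is
# only reached at x = 0.5 * radius in vector space.
intercept = 0.5
radius = 10.0
encoder = 1.0

def normalized_current(x):
    # normalized input seen by the neuron non-linearity
    return encoder * (x / radius)

x_at_intercept = intercept * radius  # 5.0, matching the second plot
```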

This is another consequence of how the ensemble computation is split up into the three conceptual parts. If you look at it closely (as we will in a bit), you will see that the negative intercept actually is being respected by the Nengo code.

Recall how I described the neuron non-linearity phase above. I mentioned that in this phase, the computations are done in the neuron “current” space, where values are analogous to currents in real neurons. Since all of the neurons in the ensemble are of the same type, you’d expect that the neurons will reach their maximum firing rate either going in the positive $x$ direction, or in the negative $x$ direction. For LIF neurons, the firing rate of neurons always increases in the positive $x$ direction.

When you define an intercept for a neuron, it will respect this convention, where positive $x$ denotes an increase in input current. Note that a negative $x$ value doesn’t necessarily mean negative current, because there are bias currents that are not outwardly visible to the user – remember that in this space, everything is just normalized to be between -1 and 1.

Okay, so how does this explain what you see in your tuning curve plot? The tuning curve plots the ensemble’s response to inputs in vector space. This means that it takes the neuron’s encoders into account when making this plot. The second neuron in your ensemble has an encoder of [-1], which means that it is more active as the input vector goes in the $-x$ direction. But, recall that the intercepts are defined in the neuron current space, which means that the intercept you see on the tuning curve must take into account the neuron’s encoder. For the first tuning curve plot (where radius==1), you specified the intercept of the second neuron to be -0.5. The question is, for what input vector value does the second neuron reach this intercept in current space?

Let’s choose a random input vector, say [0.2]. After the encoding phase, this input vector gets turned into a current value by multiplying it with the neuron’s encoder. The current value being passed to the neuron’s non-linearity is then np.dot([0.2], [-1]) = -0.2. So… for what input vector value does the neuron’s current equal -0.5 (which is the intercept)? Doing the reverse calculation, we see that it’s 0.5, which is exactly what we see in the tuning curve plot! Essentially, the minus sign in the encoder and the minus sign in the intercept value have cancelled out, resulting in the tuning curve plot you see.
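The sign cancellation above can be sketched in plain NumPy (this reproduces the encoding step for the second neuron, not Nengo’s internals):

```python
import numpy as np

# The second neuron's parameters from the example above.
encoder = np.array([-1.0])
intercept = -0.5  # specified in current space

def input_current(x):
    # encoding step: dot the input vector with the neuron's encoder
    return float(np.dot(np.atleast_1d(x), encoder))

# input_current(0.2) = -0.2, as computed above. The neuron reaches its
# intercept (-0.5 in current space) at x = 0.5 in vector space: the
# minus signs of the encoder and the intercept cancel out.
```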

If you want to plot the neurons’ responses in current space, what you’ll want to use is the response_curves function (the counterpart to tuning_curves in nengo.utils.ensemble). When you use this function to plot the ensemble’s response curves, it ignores all of the encoders. The following plot compares the tuning curve vs the response curve for the network you provided above:
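The conceptual difference between the two curves can be sketched with a toy rectified-linear neuron (the real helpers are in nengo.utils.ensemble; this is just the distinction itself):

```python
import numpy as np

# Toy rectified-linear neuron (gain and bias are arbitrary assumptions).
gain, bias = 1.0, 0.0

def rates(current):
    return np.maximum(gain * current + bias, 0.0)

xs = np.linspace(-1, 1, 5)

# Response curve: rate as a function of input *current*; encoders ignored.
response = rates(xs)

# Tuning curve: rate as a function of the input *vector*; the encoder is
# applied first, so an encoder of -1 mirrors the response curve about x = 0.
tuning_neg = rates(xs * -1.0)
```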

As a side note:

We use the terms “tuning curve” and “response curve” to differentiate the vector-based tuning (i.e., a neuron can be thought of as “tuned” to be sensitive to a certain input vector) and the current-based response (i.e., a neuron can be thought of as “responding” to an input current stimulus) of the ensemble of neurons. Also note that the tuning curve doesn’t have to be constrained to 1D, and can be used for multi-dimensional ensembles (although the most you can plot is 2D, since the tuning curves for a 2D ensemble form a 3D plot). This is an example of a tuning curve for one neuron in a 2D ensemble (taken from @tcstewar’s course notes for the NEF course, available here!):


I see… the above resolves my doubt. In the tuning curve plot we see the intercept at 5 in vector space, which is internally represented as 0.5, the desired intercept in the range -1 to 1.

With respect to the following,

when you say “remember that in this space”, does it refer to the vector space, where the expected/default values are between -1 and 1, or to the current space? The current could have any value greater than or equal to 0… right? I guess I am wrong here… I can see in the response_curves plot that the current can be negative as well, but the concept of negative current is difficult to digest; can you please explain it biophysically? I only thought that a positive current could lead to spiking.

With respect to the following:

apologies but I find the following statement contradictory

I thought that intercepts (a parameter in the neuron non-linearity phase) are defined in vector space. If the intercepts are indeed defined in current space, then obviously, in the case of tuning curves, the vector value of -0.5 will appear as 0.5 in current space for the second neuron, since its encoder is -1. Thus the following resolves my doubt.

And thanks for the source by @tcstewar!

For that, I was referring to the current space.

This is what I was trying to describe when I said that the current space is normalized between -1 and 1. To put it in a biological perspective, let’s take two example neurons. Both neurons have an activation threshold of -55mV and a resting membrane potential of -70mV. So, if they receive enough current for their membrane voltages to go from -70mV to -55mV, they will emit a spike.

For neuron 1, we do not add any bias voltage, so the current needed to cause the neuron to spike is $15mV/R_{mem}$ (where $R_{mem}$ is the membrane resistance). Just for the sake of making the numbers concrete, let’s say $R_{mem} = 1\Omega$, so the current needed to cause the first neuron to spike is 15mA.

Now, for neuron 2, let’s apply an inhibitory bias voltage of -20mV (biologically, this could be due to the environment the neuron is in, or to another neuron [that’s not in the network] providing it constant input). For neuron 2, then, the current needed to cause it to spike would be 35mA.

So, in our little example, we know the firing threshold of the first neuron to be 15mA, and of the second neuron to be 35mA. If we want to work with these numbers, though, it gets a little cumbersome, since they are on an arbitrary scale. So, what we do is normalize these numbers to lie within some fixed scale. This scale in Nengo is from -1 to 1. For our example, let’s set 15mA to be “0”, and 50mA to be “1”. Then, the equivalent firing threshold for the first neuron would be at 0, and the second neuron would be at 0.4.

Given this example, what would a “negative” current be? Well, it would be if the neuron were receiving an excitatory bias voltage (or bias current) that makes it easier to generate a spike.

I suppose I did misspeak. To me, there is no difference between a vector and a scalar (a scalar is just a single dimensional vector). But, to be precise, the encoding phase is defined in vector space (i.e., within the unit hypersphere), and the neuron non-linearity phase is defined in scalar (current) space (i.e., from -1 to 1).

Hello @xchoo,

With respect to the following,

I understand that we need to normalize arbitrary currents into some range, and the Nengo-chosen range is [-1, 1]. However, (1): How do we decide upon the minimum and maximum currents required for each neuron? With $J = \alpha \langle e, x \rangle + J_{bias}$ (where $\langle e, x \rangle$ is the dot product between $e$ and $x$), the current $J$ can take any value depending on $x$, of course, after randomly choosing the values of $\alpha$ and $J_{bias}$ in a specified range. (2): I am a bit confused about you mentioning the normalized range [-1, 1] and then using the range [0, 1] for your example. Even in the [0, 1] range, shouldn’t the normalized value of 35mA for the second neuron be (35 - 15)/(50 - 15) ≈ 0.57 instead of 0.4? Ref: (x - x_min) / (x_max - x_min).
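The arithmetic in (2) can be checked with a quick sketch, using the example’s reference currents (15mA mapped to 0, 50mA mapped to 1):

```python
# Standard min-max normalization with the example's reference currents.
def normalize(x, x_min=15.0, x_max=50.0):
    return (x - x_min) / (x_max - x_min)

n1 = normalize(15.0)  # neuron 1's threshold -> 0.0
n2 = normalize(35.0)  # neuron 2's threshold -> 20/35 ~ 0.571
```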

With respect to the following,

are you saying that had the second neuron been receiving +20mV, it would require negative current to spike? From the example in your previous reply, if the second neuron receives +20mV then its potential will be at -50mV, higher than -55mV, so it will keep spiking (following its dynamics). Or even a +1mV excitatory bias will mean that the second neuron sits at -69mV, so any positive current (provided it is applied for a sufficiently long time) can help it spike. Please resolve this confusion about the negative current too.

The minimum and maximum currents would be determined by the desired firing rate at $x=1$.

My example is using the normalized range, although I admit I didn’t check the numbers carefully (I just made up random numbers), so it may not seem that way. Regardless, the point of my example was to show that a “negative” current is only negative w.r.t. a baseline current. In actual fact, the current is not negative. If I were to redo the example, I would probably have chosen +/-5mV (from baseline) instead of +/-20mV.

Yes, you are correct. See previous note about not checking my numbers correctly.

No. If the second neuron had been receiving +20mV, then it would not have needed any additional current to spike at all. It would just spike non-stop (until the additional current is removed).

This is correct.

This depends on the dynamics of the neuron membrane. The membrane is typically “leaky”, so the input current needs to overcome this leak to build up charge across the membrane.

With respect to the above, I have put more thought into understanding what’s happening here. As I mentioned in my equation for the current $J$, I see that it can be negative as well, depending on the randomly chosen encoder $e$, $\alpha$, and bias, over a uniform range of $x$, i.e., [-1, 1]. Now, depending on the neuronal dynamics, where, say, $f(J)$ denotes the firing rate of a neuron given an input current $J$, we can define over what range of $J$ the neuron fires. E.g., for $f(J) = \max(J, 0)$, i.e., a ReLU neuron, its non-zero firing activity (or rate) is defined only over positive currents. For the LIF $f(J)$ too, we can do the same, and define the non-zero output of $f(J)$ over positive current values (it makes no sense to define a non-zero output of $f(J)$ for negative $J$, as it would not increase the membrane potential).
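The two rate functions $f(J)$ described here can be sketched as scalar functions. The LIF formula below is the standard rate approximation with its firing threshold at $J = 1$; the tau_ref and tau_rc values are illustrative assumptions, not taken from the thread.

```python
import math

def relu_rate(J):
    # ReLU neuron: non-zero rate only for positive currents.
    return max(J, 0.0)

def lif_rate(J, tau_ref=0.002, tau_rc=0.02):
    # Standard LIF rate approximation; threshold current at J = 1.
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / (J - 1.0)))

# Both are zero at or below their thresholds, and increase above them.
```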

However, during actual computation with a varied range of absolute $J$ (all positive values), Nengo normalizes them between -1 and 1, just like we normalize image pixels (with an absolute range of 0 to 255) into the range -1 to 1. I guess this is the (relative) negative current you are mentioning. Right?

Now for a neuron, $f(J)$ has an output at some $J$; the $J$ corresponding to $x=1$. Suppose we know the desired output at $x=1$ (which is the maximum firing rate, as you explained in one of the earlier posts); we can then obtain the gain ($\alpha$) and bias, given we know the intercept for that neuron. Since $\alpha$ and bias correspond to the maximum firing rate at $x=1$, we can use them to obtain the maximum current $J$ (for that neuron), as we already have the rest of the variables: $x$ and $e$ in the equation $J = \alpha \langle e, x \rangle + bias$, where $x=1$ and $e=1$ for a preferred direction along positive $x$. So I can make sense of the maximum current $J$ in this way (hopefully I am not wrong here; please do correct me if I am). About the minimum current: it is straightaway 0 if we don’t care whether the neuron fires or not; I mean, if the neuron is ReLU, the minimum current $J$ is 0+, and if the neuron is LIF, then the minimum $J$ should be such that it helps overcome the leaky potential and enables the neuron to fire a minimum number of spikes in one second. Is it? Please let me know!

Yup, your calculations are correct (from a brief read of them). One comment I will make is about the minimum current: that current is defined as the firing threshold current (which is set by the intercept when you create the ensemble). For both ReLU and LIF neurons, it’s the point where the neuron goes from not firing to firing at some non-zero rate.
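The derivation discussed above can be sketched numerically. This assumes the usual LIF rate approximation with a threshold current of $J = 1$, and illustrative tau_ref and tau_rc values; it follows the same logic as solving for gain and bias from max_rate and intercept, but is not Nengo’s actual gain_bias code.

```python
import math

tau_ref, tau_rc = 0.002, 0.02   # illustrative LIF time constants
max_rate, intercept = 150.0, 0.5

def lif_rate(J):
    # Standard LIF rate approximation, with the firing threshold at J = 1.
    if J <= 1.0:
        return 0.0
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / (J - 1.0)))

# Invert lif_rate to find the current that produces max_rate:
J_max = 1.0 / (1.0 - math.exp((tau_ref - 1.0 / max_rate) / tau_rc))

# Solve gain and bias so that J = 1 at x = intercept, and J = J_max
# at x = 1 (with encoder e = 1, J = gain * x + bias):
gain = (1.0 - J_max) / (intercept - 1.0)
bias = 1.0 - gain * intercept
```

With these values, the neuron sits exactly at its firing threshold when the input equals the intercept, and fires at max_rate when the input is 1.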

Thank you @xchoo for confirming!