Reproducing tuning curves in NumPy

I’m going to be using Nengo in my neural nets course in a few weeks, and I want to give my students an understanding of how tuning curves work. Following the “What I cannot create I do not understand” approach, I’d like to have the students use numpy and matplotlib to generate and plot a set of tuning curves like the ones in the Nengo logo, or the ones in the documentation, without using the built-in Nengo tuning_curves library. If I understand Terry’s intro NEF article correctly, I should be able to come up with a function G() such that G(alpha * e * x + b) will produce such curves, with positive e giving an ascending curve and negative e giving a descending curve. I played around a bit with np.log(), but I’m sure there’s a better way to do this.

This seems to work pretty well:

    import numpy as np

    def G(v):
        g = np.log(np.abs(v))
        g[v < 0] = 0  # no response below the intercept (negative current)
        g[g < 0] = 0  # clip responses that would otherwise be negative
        return g
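For context, here’s roughly how I’m using this to plot a family of curves; the gains, encoders, and biases below are just made-up values for illustration, not anything solved by Nengo:

    import matplotlib.pyplot as plt

    x = np.linspace(-1, 1, 200)                # represented value
    rng = np.random.RandomState(0)
    n_neurons = 8
    alpha = rng.uniform(5, 15, n_neurons)      # made-up gains
    e = rng.choice([-1, 1], n_neurons)         # +/-1 encoders (1D)
    b = rng.uniform(-2, 2, n_neurons)          # made-up biases

    for i in range(n_neurons):
        plt.plot(x, G(alpha[i] * e[i] * x + b[i]))
    plt.xlabel("x")
    plt.ylabel("response (arbitrary units)")
    plt.show()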

You might be interested in the lines from LIFRate.step_math:

    def step_math(self, dt, J, output):
        """Implement the LIFRate nonlinearity."""
        j = J - 1
        output[:] = 0  # faster than output[j <= 0] = 0
        output[j > 0] = self.amplitude / (
            self.tau_ref + self.tau_rc * np.log1p(1. / j[j > 0]))
        # the above line is designed to throw an error if any j is nan
        # (nan > 0 -> error), and not pass x < -1 to log1p

The current J is set to gain * e.dot(x) + bias, where the gain and bias are solved to obtain (by default) a uniform distribution of intercepts and max_rates (the firing rate when e.dot(x) == 1). This last bit is accomplished by LIFRate.gain_bias, and can be derived by rearranging the equation from step_math:

    def gain_bias(self, max_rates, intercepts):
        """Analytically determine gain, bias."""
        max_rates = np.array(max_rates, dtype=float, copy=False, ndmin=1)
        intercepts = np.array(intercepts, dtype=float, copy=False, ndmin=1)

        inv_tau_ref = 1. / self.tau_ref if self.tau_ref > 0 else np.inf
        if np.any(max_rates > inv_tau_ref):
            raise ValidationError("Max rates must be below the inverse "
                                  "refractory period (%0.3f)" % inv_tau_ref,
                                  attr='max_rates', obj=self)

        x = 1.0 / (1 - np.exp(
            (self.tau_ref - (1.0 / max_rates)) / self.tau_rc))
        gain = (1 - x) / (intercepts - 1.0)
        bias = 1 - gain * intercepts
        return gain, bias
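Putting those two pieces together, here is a rough sketch of how one could reproduce the tuning curves in plain numpy; the constants (tau_rc, tau_ref) and the ranges for max_rates and intercepts are just illustrative defaults, not values pulled from a particular model:

    import numpy as np

    tau_rc, tau_ref = 0.02, 0.002                 # typical LIF time constants
    n_neurons = 8
    rng = np.random.RandomState(0)
    max_rates = rng.uniform(200, 400, n_neurons)
    intercepts = rng.uniform(-1.0, 0.9, n_neurons)
    encoders = rng.choice([-1, 1], n_neurons)     # 1D encoders

    # gain and bias, rearranged from step_math (same algebra as gain_bias above)
    z = 1.0 / (1 - np.exp((tau_ref - 1.0 / max_rates) / tau_rc))
    gain = (1 - z) / (intercepts - 1.0)
    bias = 1 - gain * intercepts

    def lif_rate(J):
        """Steady-state firing rate for input current J (amplitude = 1)."""
        out = np.zeros_like(J)
        j = J - 1
        out[j > 0] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / j[j > 0]))
        return out

    x = np.linspace(-1, 1, 200)
    curves = [lif_rate(gain[i] * encoders[i] * x + bias[i])
              for i in range(n_neurons)]   # one tuning curve per neuron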

Thanks, Aaron; this really clears things up.

By the way, for the sake of completeness, and just in case some of your students might benefit from a derivation, here is one that I think is suitable for those from an engineering / calculus background with a working knowledge of ODEs:


The sub-threshold ($0 < v(t) < 1$) dynamics of the LIF model follow the first-order differential equation:

$$\tau_\text{rc} \dot{v}(t) + v(t) = j(t)$$

If coming from a control theory background, this is sometimes written in the Laplace domain as a lowpass filter:

$$\frac{V(s)}{J(s)} = \frac{1}{\tau_\text{rc} s + 1}$$

In the time-domain, we can rearrange into the more familiar form of a “leaky integrator”:

$$\dot{v}(t) = \frac{1}{\tau_\text{rc}} \left( j(t) - v(t) \right)$$

and solve this ODE given a constant input current $j(t) = u$:

$$v(t) = v(0) + \left( u - v(0) \right) \left(1 - e^{-\frac{t}{\tau_\text{rc}}} \right) $$

Now we want to figure out how much time it takes for the neuron to spike given $u$. Assuming we aren’t in the refractory period, we can do this by setting $v(0) = 0$, $v(t) = 1$, and rearranging for $t$:

$$
\begin{align}
1 &= u \left(1 - e^{-\frac{t}{\tau_\text{rc}}} \right) \\
\frac{1}{u} &= 1 - e^{-\frac{t}{\tau_\text{rc}}} \\
e^{-\frac{t}{\tau_\text{rc}}} &= 1 - \frac{1}{u} \\
-\frac{t}{\tau_\text{rc}} &= \log \left( 1 - \frac{1}{u} \right)\\
t &= -\tau_\text{rc} \log \left( 1 - \frac{1}{u} \right)
\end{align}
$$

For subtle numerical reasons related to the use of np.log1p, we further rearrange this as follows:
$$
\begin{align}
t &= - \tau_\text{rc} \log \left( \frac{u - 1}{u} \right) \\
&= \tau_\text{rc} \log \left[ \left( \frac{u - 1}{u} \right)^{-1} \right] \\
&= \tau_\text{rc} \log \left( \frac{u}{u - 1} \right) \\
&= \tau_\text{rc} \log \left( \frac{u - 1 + 1}{u - 1} \right) \\
&= \tau_\text{rc} \log \left( 1 + \frac{1}{u - 1} \right)
\end{align}
$$

In either case, the total time to spike, including the refractory period, is $\tau_\text{ref} + t$. And since frequency is the reciprocal of period, the spike-rate in response to $u$ becomes:

$$
r(u) = \frac{1}{\tau_\text{ref} + \tau_\text{rc} \texttt{log1p} \left( \frac{1}{u - 1} \right)} \text{.} \qquad \square
$$
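If it helps to check the algebra, here is a quick numerical sanity check (just a forward-Euler integration of the leaky integrator, not anything from Nengo) comparing the simulated time-to-spike against the closed-form expression; the constants are arbitrary, with $u > 1$:

    import numpy as np

    tau_rc, tau_ref, u = 0.02, 0.002, 3.0
    dt = 1e-6

    # integrate tau_rc * dv/dt = u - v from v(0) = 0 until v reaches threshold 1
    v, t_sim = 0.0, 0.0
    while v < 1.0:
        v += dt * (u - v) / tau_rc
        t_sim += dt

    t_closed = tau_rc * np.log1p(1.0 / (u - 1))
    print(t_sim, t_closed)                  # should agree to within ~dt
    print(1.0 / (tau_ref + t_closed))       # r(u), the steady-state rate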


Feel free to use / repurpose this however you see fit. :slight_smile:


Thanks again, Aaron. I’m not sure there are any students in the class (all undergrads) who can follow the math, but I’ll definitely benefit from studying it myself.

Hi arvoelke,

I’m confused about how LIF neurons are modelled in Nengo.

Shouldn’t the differential equation be:

$$C \dot{v}(t) + \frac{v(t)}{R} = j(t)$$

so that:

$$\tau_\text{rc} \dot{v}(t) + v(t) = R \, j(t)$$
What happened to the parallel resistance?

I have a CMOS circuit implementation of a LIF neuron, and I have simulated it in SPICE as well as in a Python library called Brian2. I wanted to train a LIF neural network using the electrical parameters of the CMOS circuit. I know R, C, the refractory period, the resting potential, and the threshold. How do I input these parameters into the LIF neuron in Nengo? I can only modify RC and the refractory period. I want to be able to set R and C individually, since R determines the steady-state membrane voltage for a given input current.

Thank you very much

$\tau_\text{rc} \dot{v}(t) + v(t) = R \, j(t)$ is identical to the equation I posted, with $R \, j(t)$ as the input, so there is really no difference except for a constant scaling term that we in effect absorb into the gain calculation on the neurons. And so it might just be a matter of setting the neuron gain appropriately; you are free to interpret and scale these values however you please. (Replying on phone)
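To make that concrete, here is a rough sketch of one way the circuit parameters could be folded in when building an ensemble; the values of R, C, and tau_ref below are made up, and any threshold-voltage normalization would also be absorbed into the gain in the same way:

    import nengo
    import numpy as np

    R, C = 1e6, 20e-9       # example circuit values (ohms, farads)
    tau_ref = 0.002         # refractory period of the circuit
    n_neurons = 50

    with nengo.Network() as model:
        ens = nengo.Ensemble(
            n_neurons=n_neurons,
            dimensions=1,
            neuron_type=nengo.LIF(tau_rc=R * C, tau_ref=tau_ref),
            gain=np.full(n_neurons, R),   # one choice: fold R into the gain
            bias=np.zeros(n_neurons),     # no bias current in this sketch
        )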

Thank you very much. I see, the gain and the bias absorb the R term anyway.

Another thing: the input x to the neuron in J(t) = ax(t) + b is normalized between 0 and 1, right? a is the gain and b is the bias. From the documentation, x is actually e dot x.
How does that work? The way I see it, x is the weighted sum of the neuron outputs in the preceding layer. Aren’t the neuron outputs of the preceding layer set by the amplitude parameter of the LIF class? Thanks in advance

It’s $v(t)$ that is normalized to be within $[0, 1)$, through the LIF.step logic of spiking when it reaches 1 and then resetting to 0 (although the reset potential can be changed with the min_voltage parameter). The input current, $j(t)$, can be anything really.

This depends on the kind of connection.

If you are using NEF connections, where you are connecting into an ensemble, then yes $j_i(t) = \alpha \left\langle \mathbf{x}(t), \mathbf{e}_i \right\rangle + \beta$ is a linear projection of $\mathbf{x}(t)$, where $\mathbf{e}_i$ is the “encoder” for the $i^\text{th}$ neuron. What $\mathbf{x}(t)$ is will depend on context (i.e., where the connection came from). If it came from another ensemble, then $\mathbf{x}(t)$ will be a “decoded” vector using the first two principles of the NEF (*). If it came from a node or directly from pre.neurons then $\mathbf{x}(t)$ will be the vector output of the node or the neural activities, respectively.

However, if you are connecting directly into post.neurons, then the dot product with the encoders is skipped, and it instead becomes $j_i(t) = \alpha x_i(t) + \beta$ for the $i^\text{th}$ neuron, such that $\mathbf{x}(t)$ needs to have as many dimensions as there are neurons in the post object.

There is a lot more information in the docs, e.g., on the page Nengo connections in depth.

Yes, this is correct if you are connecting from one ensemble to another ensemble using NEF connections. It will also be filtered if you are specifying a synapse on that connection.

Strictly speaking, the amplitude indeed affects the neuron outputs from the preceding layer. But when using NEF connections, the $\mathbf{x}(t)$ vector is determined as a weighted sum of activities from the preceding layer using the “decoders” mentioned above (*). These weights are solved from an offline batched least-squares optimization problem according to the function passed into the corresponding connection. An important concept in the NEF is that any connection that has decoders followed by encoders is mathematically equivalent to using weights that are the outer product of the encoders and decoders (or, conversely, the encoders and decoders are low-rank “factors” of the underlying connection weights).

(*) The NEF summary for Nengo may be helpful for more information here. My thesis (specifically, sections 2.2 and 3) also explains this with a lot more technical depth if you are curious.
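As a tiny illustration of that last point (pure numpy, with made-up shapes rather than anything pulled from a built model), decoding followed by encoding gives the same result as multiplying by the full weight matrix:

    import numpy as np

    rng = np.random.RandomState(0)
    n_pre, n_post, d = 100, 80, 3
    a = rng.rand(n_pre)          # activities of the pre population
    D = rng.randn(d, n_pre)      # decoders of pre (d x n_pre)
    E = rng.randn(n_post, d)     # gain-scaled encoders of post (n_post x d)

    W = E @ D                    # full connection weight matrix (n_post x n_pre)
    assert np.allclose(E @ (D @ a), W @ a)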

If you have any specific questions about anything I’ve said here please feel free to ask follow-up questions in this thread. But if you have any entirely new questions you can start a new thread to help keep things organized and searchable. :slight_smile: Cheers.

But e and x are normalized, right? It says here: https://www.nengo.ai/nengo/_modules/nengo/ensemble.html
in the definition of max_rates,

max_rates : Distribution or (n_neurons,) array_like, optional
The activity of each neuron when the input signal x is magnitude 1
and aligned with that neuron’s encoder e;
i.e., when dot(x, e) = 1.

Hi again @iampaulq – sorry for the delay in getting back to you. The encoders $e$ are indeed normalized (by default), but $x$ is not. The definition for max_rates that you referenced is just that: a definition. It is saying that in the case when $x$ happens to be normalized (have an L2 norm of 1), and is pointing in the same direction as the encoder, then the corresponding neuron will have the specified “max rate”. But this is not meant to imply that $x$ is always going to be normalized. Hope this helps.

Hi arvoelke, thanks for answering.

I managed to scale the neuron inputs instead of making a new neuron model with its own threshold voltage. I think my confusion with the nengo.neuron parameters has been cleared up. Right now, I am confused about how to interconnect different neuron groups/ensembles.

What I want to accomplish is to fully connect each neuron in layer 1 to the neurons in layer 2. I want the connection to be weighted only, without additional functions like an LPF or decoders. I’m not sure how to proceed.

In my hardware neurons, the output spikes are voltages and are sent to resistive connections to generate synaptic currents. The sum of all these plus some bias current is the total input, J = Jsum + Jbias. In Nengo, Jsum is gain * (e dot x). If I could just connect the output of layer 1, out1, and feed it as input to layer 2, then it would be perfect: Jsum = gain * (e dot out1), where the encoder e acts as the weights.

Hi @iampaulq,

I may have answered your question in another post, but I’ll reply here as well just in case.

In Nengo, you can do neuron-neuron connections by using the .neurons attribute of any Nengo ensemble. As an example, if you had two populations (ens_a and ens_b) and you wanted to define a full connection weight matrix between the two, you’d do:

nengo.Connection(ens_a.neurons, ens_b.neurons, transform=weight_matrix)

Note that you are not restricted to creating fully connected weight matrices. Nengo allows you to mix and match the pre and post of the connections, as long as the given transformation matrix is sized appropriately.

For example:

nengo.Connection(ens_a, ens_b.neurons, transform=weight_matrix)

connects the decoded output of ens_a directly to the neurons of ens_b, bypassing the encoders of ens_b, but using the decoders of ens_a, while:

nengo.Connection(ens_a.neurons, ens_b, transform=weight_matrix)

connects the direct neuron output of ens_a to the encoded input of ens_b, bypassing the decoders of ens_a, but using the encoders of ens_b, and of course (for completeness):

nengo.Connection(ens_a, ens_b, transform=weight_matrix)

connects the decoded output of ens_a to the encoded input of ens_b, using both the decoders of ens_a and the encoders of ens_b.
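For completeness, here is a minimal runnable sketch of the fully connected neuron-to-neuron case; the layer sizes and the random weight values are placeholders:

    import nengo
    import numpy as np

    n_layer1, n_layer2 = 20, 30
    weight_matrix = np.random.randn(n_layer2, n_layer1) * 1e-3   # shape (post, pre)

    with nengo.Network() as model:
        layer1 = nengo.Ensemble(n_layer1, dimensions=1)
        layer2 = nengo.Ensemble(n_layer2, dimensions=1)
        nengo.Connection(layer1.neurons, layer2.neurons,
                         transform=weight_matrix,
                         synapse=None)   # no lowpass filtering on this connection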