# Reproducing tuning curves in NumPy

**simondlevy**#1

I’m going to be using Nengo in my neural nets course in a few weeks, and I want to give my students an understanding of how tuning curves work. Following the “What I cannot create I do not understand” approach, I’d like to have the students use numpy and matplotlib to generate and plot a set of tuning curves like the ones in the Nengo logo, or the ones in the documentation, without using the built-in Nengo tuning_curves library. If I understand Terry’s intro NEF article correctly, I should be able to come up with a function G() such that G(alpha * e * x + b) will produce such curves, with positive e giving an ascending curve and negative e giving a descending curve. I played around a bit with np.log(), but I’m sure there’s a better way to do this.

**simondlevy**#2

This seems to work pretty well:

```
def G(v):
    g = np.log(np.abs(v))  # log of |v|
    g[v < 0] = 0           # silence neurons driven below zero
    g[g < 0] = 0           # clip negative logs (|v| < 1) to zero rate
    return g
```
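For students who want to see the resulting curves, here is one possible way to wire `G` into the `alpha * e * x + b` form from the original question. The parameter ranges below are arbitrary choices for illustration, not values Nengo uses:

```
import numpy as np

def G(v):
    g = np.log(np.abs(v))
    g[v < 0] = 0
    g[g < 0] = 0
    return g

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 200)   # represented value
n_neurons = 8
alpha = rng.uniform(5, 20, n_neurons)   # gains (illustrative range)
e = rng.choice([-1, 1], n_neurons)      # 1-D encoders: +1 or -1
b = rng.uniform(1, 5, n_neurons)        # biases (illustrative range)

# One tuning curve per neuron, evaluated over all of x
curves = np.array([G(a * ei * x + bi) for a, ei, bi in zip(alpha, e, b)])

# To plot:
# import matplotlib.pyplot as plt; plt.plot(x, curves.T); plt.show()
```

Positive-encoder curves come out non-decreasing and negative-encoder curves non-increasing, matching the ascending/descending behavior asked about above.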

**arvoelke**#3

You might be interested in the lines from LIFRate.step_math:

```
def step_math(self, dt, J, output):
    """Implement the LIFRate nonlinearity."""
    j = J - 1
    output[:] = 0  # faster than output[j <= 0] = 0
    output[j > 0] = self.amplitude / (
        self.tau_ref + self.tau_rc * np.log1p(1. / j[j > 0]))
    # the above line is designed to throw an error if any j is nan
    # (nan > 0 -> error), and not pass x < -1 to log1p
```
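Stripped of the class machinery, that logic might be re-expressed as a standalone function. The `tau` defaults below are Nengo's defaults; the function name itself is just illustrative:

```
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002, amplitude=1.0):
    """Steady-state LIF firing rate for input current J
    (a sketch of the logic in LIFRate.step_math)."""
    j = np.atleast_1d(np.asarray(J, dtype=float)) - 1
    out = np.zeros_like(j)
    above = j > 0  # only neurons driven above threshold fire
    out[above] = amplitude / (tau_ref + tau_rc * np.log1p(1.0 / j[above]))
    return out
```

Note that the rate saturates below `1 / tau_ref` (500 Hz with these defaults), since a neuron cannot spike faster than once per refractory period.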

The current `J` is set to `gain * e.dot(x) + bias`, where the `gain` and `bias` are solved to obtain (by default) a uniform distribution of `intercepts` and `max_rates` (the firing rate when `e.dot(x) == 1`). This last bit is accomplished by LIFRate.gain_bias, and can be derived by rearranging the equation from `step_math`:

```
def gain_bias(self, max_rates, intercepts):
    """Analytically determine gain, bias."""
    max_rates = np.array(max_rates, dtype=float, copy=False, ndmin=1)
    intercepts = np.array(intercepts, dtype=float, copy=False, ndmin=1)
    inv_tau_ref = 1. / self.tau_ref if self.tau_ref > 0 else np.inf
    if np.any(max_rates > inv_tau_ref):
        raise ValidationError("Max rates must be below the inverse "
                              "refractory period (%0.3f)" % inv_tau_ref,
                              attr='max_rates', obj=self)
    x = 1.0 / (1 - np.exp(
        (self.tau_ref - (1.0 / max_rates)) / self.tau_rc))
    gain = (1 - x) / (intercepts - 1.0)
    bias = 1 - gain * intercepts
    return gain, bias
```
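Putting the two pieces together, one can solve for `gain` and `bias` from a chosen max rate and intercept and check that the resulting tuning curve actually hits those targets. This is a sketch using the equations above, outside the Nengo class structure; the `tau` values are Nengo's defaults and all other names are illustrative:

```
import numpy as np

tau_rc, tau_ref = 0.02, 0.002

def gain_bias(max_rates, intercepts):
    # Rearranged rate equation, as in LIFRate.gain_bias
    max_rates = np.atleast_1d(np.asarray(max_rates, dtype=float))
    intercepts = np.atleast_1d(np.asarray(intercepts, dtype=float))
    x = 1.0 / (1 - np.exp((tau_ref - 1.0 / max_rates) / tau_rc))
    gain = (1 - x) / (intercepts - 1.0)
    bias = 1 - gain * intercepts
    return gain, bias

def rate(J):
    # Steady-state LIF rate, as in LIFRate.step_math
    j = np.atleast_1d(np.asarray(J, dtype=float)) - 1
    out = np.zeros_like(j)
    out[j > 0] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / j[j > 0]))
    return out

# Aim for 100 Hz at x = 1 and an onset (intercept) at x = 0.25
gain, bias = gain_bias(100.0, 0.25)
```

Evaluating `rate(gain * x + bias)` over `x = np.linspace(-1, 1)` then gives a curve that is zero below the intercept and reaches the requested maximum rate at `x = 1`.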

**arvoelke**#5

By the way, for sake of completeness, and just in case some of your students might benefit from a derivation, here is one that I think is suitable for those from an engineering / calculus background with a working knowledge of ODEs:

The sub-threshold ($0 < v(t) < 1$) dynamics of the LIF model follow the first-order differential equation:

$$\tau_\text{rc} \dot{v}(t) + v(t) = j(t)$$

If coming from a control theory background, this is sometimes written in the Laplace domain as a lowpass filter:

$$\frac{V(s)}{J(s)} = \frac{1}{\tau_\text{rc} s + 1}$$

In the time-domain, we can rearrange into the more familiar form of a “leaky integrator”:

$$\dot{v}(t) = \frac{1}{\tau_\text{rc}} \left( j(t) - v(t) \right)$$

and solve this ODE given a constant input current $j(t) = u$:

$$v(t) = v(0) + \left( u - v(0) \right) \left(1 - e^{-\frac{t}{\tau_\text{rc}}} \right) $$
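This closed-form solution is easy to check numerically against a forward-Euler integration of the leaky integrator. All the parameter values below are illustrative:

```
import numpy as np

tau_rc, u, v0 = 0.02, 2.0, 0.0  # time constant, constant input, initial state
dt, t_end = 1e-6, 0.01          # Euler step and simulation horizon

v = v0
for _ in range(int(round(t_end / dt))):
    v += dt / tau_rc * (u - v)  # v'(t) = (j(t) - v(t)) / tau_rc

# Closed-form solution from the derivation above
v_exact = v0 + (u - v0) * (1 - np.exp(-t_end / tau_rc))
```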

Now we want to figure out how much time it takes for the neuron to spike given $u$. Assuming we aren’t in the refractory period, we can do this by setting $v(0) = 0$, $v(t) = 1$, and rearranging for $t$:

$$
\begin{align}
1 &= u \left(1 - e^{-\frac{t}{\tau_\text{rc}}} \right) \\
\frac{1}{u} &= 1 - e^{-\frac{t}{\tau_\text{rc}}} \\
e^{-\frac{t}{\tau_\text{rc}}} &= 1 - \frac{1}{u} \\
\log e^{-\frac{t}{\tau_\text{rc}}} &= \log \left( 1 - \frac{1}{u} \right) \\
t &= -\tau_\text{rc} \log \left( 1 - \frac{1}{u} \right)
\end{align}
$$

For subtle numerical reasons related to the use of `np.log1p` (which computes $\log(1 + x)$ accurately even when $x$ is very small), we further rearrange this as follows:

$$
\begin{align}
t &= - \tau_\text{rc} \log \left( \frac{u - 1}{u} \right) \\
&= \tau_\text{rc} \log \left[ \left( \frac{u - 1}{u} \right)^{-1} \right] \\
&= \tau_\text{rc} \log \left( \frac{u}{u - 1} \right) \\
&= \tau_\text{rc} \log \left( \frac{u - 1 + 1}{u - 1} \right) \\
&= \tau_\text{rc} \log \left( 1 + \frac{1}{u - 1} \right)
\end{align}
$$

In either case, the total time to spike, including the refractory period, is $\tau_\text{ref} + t$. And since frequency is the reciprocal of period, the spike-rate in response to $u$ becomes:

$$
r(u) = \frac{1}{\tau_\text{ref} + \tau_\text{rc} \, \texttt{log1p} \left( \frac{1}{u - 1} \right)} \text{.} \qquad \square
$$
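As a quick sanity check on the derivation, the two algebraic forms of the time-to-spike should agree exactly, and the resulting rate should be sensible. The parameter values here are illustrative ($u$ must exceed 1 for the neuron to spike at all):

```
import numpy as np

tau_rc, tau_ref, u = 0.02, 0.002, 2.0

t1 = -tau_rc * np.log(1 - 1 / u)     # form before the rearrangement
t2 = tau_rc * np.log1p(1 / (u - 1))  # log1p form used by Nengo

rate = 1 / (tau_ref + t2)  # spikes per second at constant input u
```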

Feel free to use / repurpose this however you see fit.

**simondlevy**#6

Thanks again, Aaron. I’m not sure there are any students in the class (all undergrads) who can follow the math, but I’ll definitely benefit from studying it myself.