How to implement heaviside step function as a new nengo object?

I am trying to implement a network that uses the Heaviside step function as an activation function. So, I created a new neuron type following the example here: Adding new objects to Nengo — Nengo 3.2.0.dev0 docs. My implementation is as follows:

import numpy as np

# Neuron types must subclass `nengo.neurons.NeuronType`
class SmoothStep(nengo.neurons.NeuronType):
    # We don't need any additional parameters here;
    # gain and bias are sufficient. But, if we wanted
    # more parameters, we could accept them by creating
    # an __init__ method.
    def gain_bias(self, max_rates=0, intercepts=0):
        """Return gain and bias given maximum firing rate and x-intercept."""
        return np.array([1.]), np.array([0.])

    def step(self, dt, J, output):
        output[...] = np.heaviside(J, 0.)

But when I run simulations with nengo_dl.Simulator, this neuron type causes TensorFlow to throw the following error:

UserWarning: <class 'nengo_utils.SmoothStep'> does not have a native TensorFlow implementation; falling back to Python implementation.

line 1812, in _create_c_op
    c_op = pywrap_tf_session.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimensions must be equal, but are 1580 and 2090 for '{{node TensorGraph/while/iteration_0/CopyBuilder/BroadcastTo}} = BroadcastTo[T=DT_FLOAT, Tidx=DT_INT32](TensorGraph/while/Identity_7, TensorGraph/while/iteration_0/CopyBuilder/BroadcastTo/shape)' with input shapes: [1580], [2] and with input tensors computed as partial shapes: input[1] = [32,2090].

It looks like having a corresponding TF implementation would help, but I don't know how to register it alongside the Python version from my application code.
Also, is there any other clean way to do this?

Hi @joshivm22,

From what I see in the code, the warning UserWarning: <class 'nengo_utils.SmoothStep'> does not have a native TensorFlow implementation; falling back to Python implementation. should not cause the simulation to halt at all; rather, NengoDL will just use the Python implementation of the neuron instead of a native TensorFlow implementation. The neuron types that do have native TensorFlow implementations can be found in the TF docs, or in the NengoDL codebase.

As an example, if I run this code:

import nengo
import nengo_dl
import numpy as np
from nengo.neurons import NeuronType

# Custom heaviside neuron type for Nengo
class CustomNeuron(NeuronType):
    def gain_bias(self, max_rates=0, intercepts=0):
        return np.array([1.0]), np.array([0.0])

    def step(self, dt, J, output):
        output[...] = np.heaviside(J, 0.0)

# Test Nengo network with 1 neuron
with nengo.Network() as model:
    in_node = nengo.Node(1)
    ens = nengo.Ensemble(1, 1, neuron_type=CustomNeuron())
    nengo.Connection(in_node, ens)

# Run the Nengo simulation
with nengo.Simulator(model) as sim:
    sim.run(1.0)

# Create and run the NengoDL simulation
with nengo_dl.Simulator(model) as sim_dl:
    sim_dl.run(1.0)

I get the does not have a native TensorFlow implementation warning, but the simulation completes without error.

If you could provide some more context (or the code) about what you are doing with the NengoDL simulator object, we can probably pinpoint exactly what is causing the tensorflow.python.framework.errors_impl.InvalidArgumentError error that you are encountering.

Adding custom neurons to NengoDL

As an aside, to add a custom Nengo neuron to NengoDL, you'll need to define a custom TFNeuronBuilder subclass, and you can find examples of them in the NengoDL codebase. Then you'll need to register it with NengoDL's SimNeuronsBuilder class so that it gets used during the build process. Here's a code example that does both:

import tensorflow as tf
from nengo_dl.neuron_builders import SimNeuronsBuilder, TFNeuronBuilder

# Custom neuron builder for NengoDL
class CustomNeuronBuilder(TFNeuronBuilder):
    def build_pre(self, signals, config):
        super().build_pre(signals, config)

    def step(self, J, dt):
        return tf.nn.tanh(J)  # Replace with your own heaviside implementation

# Register our custom Nengo neuron with NengoDL's neuron builder.
# `SimNeuronsBuilder` contains a dictionary (`TF_NEURON_IMPL`) that maps a neuron class
# (dictionary key) to an associated `TFNeuronBuilder` class that does the build process
# (dictionary value).
SimNeuronsBuilder.TF_NEURON_IMPL[CustomNeuron] = CustomNeuronBuilder

And here's the full code that you can play around with (attachment, 1.4 KB).
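If you want the forward pass to actually be a step rather than the tanh placeholder above, the TF expression would just be a cast of a comparison, e.g. tf.cast(J > 0, J.dtype). That one-liner is my own sketch (not something from the NengoDL codebase), so to make it easy to check, here is the same computation in NumPy, which you can compare against np.heaviside:

```python
import numpy as np

def heaviside_step(J):
    # Forward-pass step function: 1.0 where J > 0, else 0.0.
    # Inside the TF builder's `step` method, the equivalent would be
    # `tf.cast(J > 0, J.dtype)` (my assumption; choose the value at
    # J == 0 to match whichever convention you need).
    return (J > 0).astype(J.dtype)

J = np.array([-1.0, 0.0, 0.5, 2.0])
print(heaviside_step(J))  # same result as np.heaviside(J, 0.0)
```

Note that this only defines the forward pass; as discussed below, the gradient of a hard step is zero almost everywhere, so this is mainly useful for inference, not training.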

The heaviside function

While I was looking into your error, I did some searching for heaviside activation function implementations in TF, and ran across this article. It seems that the heaviside function itself is ill-suited for use in TF because its gradient is zero almost everywhere (and undefined at zero), and since TF uses the function's gradient for training weights, the heaviside function severely hampers the training process. If you want an activation like the heaviside function, I'd recommend checking out the tanh or sigmoid activation functions, both of which are natively supported by NengoDL.
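To make the "smooth step" idea concrete, here's a quick NumPy sketch. The temperature parameter is my own name for the sharpness knob (not a Nengo or TF argument): dividing the input by a small temperature before the squashing function makes the output approach the heaviside step, while the gradient stays nonzero everywhere so TF can backpropagate through it.

```python
import numpy as np

def smooth_step(J, temperature=0.05):
    # sigmoid(J / temperature): as temperature -> 0, this approaches
    # np.heaviside(J, 0.5), but unlike the hard step it is
    # differentiable everywhere, so gradient-based training works.
    return 1.0 / (1.0 + np.exp(-J / temperature))

J = np.array([-1.0, -0.01, 0.01, 1.0])
print(np.round(smooth_step(J), 3))
print(np.heaviside(J, 0.5))
```

Smaller temperatures give a sharper step (a better approximation of heaviside) at the cost of steeper, more training-unfriendly gradients near zero, so the value is a trade-off you'd tune for your task.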

Hi @xchoo, your answer has been extremely helpful. I agree that instead of implementing the Heaviside function directly, I should use tanh with an appropriate temperature value to approximate it. And thanks for the sample code; it is extremely helpful and works.