[NengoDL] How to turn off a single neuron

Hello, I’m trying to modify a spiking CNN model and observe changes using NengoDL.
The model consists of several convolutional layers, each containing a Conv2D TensorNode and LIF activation.

...
# a convolutional layer
conv = nengo_dl.Layer(tf.keras.layers.Conv2D(...))(conv)
conv = nengo_dl.Layer(nengo.LIF(...))(conv)

# another convolutional layer
conv = nengo_dl.Layer(tf.keras.layers.Conv2D(...))(conv)
conv = nengo_dl.Layer(nengo.LIF(...))(conv)
...

It is trained and tested with the fit, evaluate, and predict methods of the nengo_dl.Simulator object.

Is there any method to deactivate one specific LIF neuron in a convolutional layer during the testing phase?
Thank you in advance, and any references will be welcomed :slightly_smiling_face:

Hi @neuroshin! Apologies for the delay in responding; it took me a while to figure out an elegant solution to your question (but I think I’ve managed it! :smiley:).

My solution revolves around using a TensorFlow Lambda layer to introduce a negative bias into the neurons you wish to deactivate. The code below illustrates how this is accomplished.

import numpy as np
import nengo
import nengo_dl
import matplotlib.pyplot as plt
import tensorflow as tf


n_neurons = 10

with nengo.Network(seed=0) as model:
    # Define an inhibitory TF variable (so that we can modify it later)
    inhib_values = [[0.0] * n_neurons]
    inhib_var = tf.Variable(inhib_values)

    # Make an input node with sin waves
    inp = nengo.Node(output=lambda t: [np.sin(x * np.pi * t) for x in range(n_neurons)])

    # Insert the inhibitory layer, using a lambda function to add in the inhibitory bias
    inhib_layer = nengo_dl.Layer(tf.keras.layers.Lambda(lambda x: x + inhib_var))(inp)

    # The output (LIF) neuron layer
    out = nengo_dl.Layer(nengo.LIF())(inhib_layer)

    # Probes
    p_in = nengo.Probe(inp)
    p_out = nengo.Probe(out, synapse=0.01)


with nengo_dl.Simulator(model) as sim:
    # Compile the model
    sim.compile(
        optimizer=tf.optimizers.Adam(),
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    # # Train the model (if desired)
    # sim.fit(...)

    # Inhibit the desired neurons by inserting a negative bias value
    inhib_values[0][5] = -10.0
    # Use the TF backend set_value method to change the value of the TF variable
    tf.keras.backend.set_value(inhib_var, inhib_values)

    # Run the simulation
    sim.run(1)


# Plot the probed data
plt.figure()
plt.plot(sim.trange(), sim.data[p_in])
plt.legend(range(n_neurons))
plt.figure()
plt.plot(sim.trange(), sim.data[p_out])
plt.legend(range(n_neurons))
plt.show()

When no inhibition is set, the output of p_out looks like this (take notice of neuron 5):
[plot: filtered p_out spike output, all 10 neurons active]
And when the inhibition is set (as in the code above), the output of p_out looks like this. Notice that neuron 5 is now silent.
[plot: filtered p_out spike output, neuron 5 silent]

As for your example, you’d want to insert this Lambda layer between the convolution and LIF layers, i.e.:

# a convolutional layer
conv = nengo_dl.Layer(tf.keras.layers.Conv2D(...))(conv)
conv = nengo_dl.Layer(tf.keras.layers.Lambda(lambda x: x + inhib_var))(conv)
conv = nengo_dl.Layer(nengo.LIF(...))(conv)
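
To make this concrete, here is a minimal sketch of a single convolutional block with the inhibitory Lambda layer in place. The input size, filter count, and kernel size are placeholder values (not taken from your model), and the index arithmetic assumes the Conv2D output is flattened in row-major (height, width, channels) order:

import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

# Hypothetical sizes: a 10x10 single-channel input and 4 3x3 filters
# ("valid" padding gives an 8x8x4 output, i.e. 256 post-convolution neurons)
in_h, in_w, n_filters, k = 10, 10, 4, 3
out_h, out_w = in_h - k + 1, in_w - k + 1
n_conv = out_h * out_w * n_filters

# One inhibitory bias value per post-convolution neuron
inhib_values = np.zeros((1, n_conv), dtype=np.float32)

with nengo.Network(seed=0) as model:
    inhib_var = tf.Variable(inhib_values)

    inp = nengo.Node(np.zeros(in_h * in_w))  # placeholder input

    # convolution -> inhibitory Lambda -> LIF, mirroring the snippet above
    conv = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=n_filters, kernel_size=k))(
        inp, shape_in=(in_h, in_w, 1)
    )
    conv = nengo_dl.Layer(tf.keras.layers.Lambda(lambda x: x + inhib_var))(conv)
    conv = nengo_dl.Layer(nengo.LIF())(conv)

# Target the neuron at (row=2, col=3, channel=1), assuming row-major flattening
# of the (height, width, channels) Conv2D output
row, col, chan = 2, 3, 1
flat_index = (row * out_w + col) * n_filters + chan
inhib_values[0, flat_index] = -10.0

with nengo_dl.Simulator(model) as sim:
    # push the updated bias into the TF variable before running the test phase
    tf.keras.backend.set_value(inhib_var, inhib_values)
    sim.run(0.1)

The important part is that inhib_var has one entry per post-convolution neuron, so any single neuron can be targeted through its flattened index.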

You can also inhibit neurons with a nengo.Node, like so (based on @xchoo’s code above, inhibiting the neuron with index 5):

import numpy as np
import nengo
import nengo_dl
import matplotlib.pyplot as plt
import tensorflow as tf


n_neurons = 10

with nengo.Network(seed=0) as model:
    # Make an input node with sin waves
    inp = nengo.Node(output=lambda t: [np.sin(x * np.pi * t) for x in range(n_neurons)])

    # The output (LIF) neuron layer
    out = nengo_dl.Layer(nengo.LIF())(inp)
    
    # Strongly inhibit a set of neurons given by the boolean inhibit_mask
    inhibit = nengo.Node(-1e6)
    inhibit_mask = np.zeros(n_neurons, dtype=bool)
    inhibit_mask[5] = True
    nengo.Connection(inhibit, out, transform=inhibit_mask[:, None], synapse=None)

    # Probes
    p_in = nengo.Probe(inp)
    p_out = nengo.Probe(out, synapse=0.01)

with nengo_dl.Simulator(model) as sim:
    # Compile the model
    sim.compile(
        optimizer=tf.optimizers.Adam(),
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    # # Train the model (if desired)
    # sim.fit(...)

    # Run the simulation
    sim.run(1)


# Plot the probed data
plt.figure()
plt.plot(sim.trange(), sim.data[p_in])
plt.legend(range(n_neurons))
plt.figure()
plt.plot(sim.trange(), sim.data[p_out])
plt.legend(range(n_neurons))
plt.show()

However, note that inhibition does not work for all kinds of spiking neuron models. This strategy works for nengo.LIF(), but not for nengo.RegularSpiking(nengo.Tanh()), for instance.
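
If you want a quick way to check whether a particular neuron type can be silenced like this, you can look at its rate response to a strongly negative input current. A small sanity check for nengo.LIF() (the gain and bias values here are arbitrary):

import numpy as np
import nengo

# LIF is completely silent for a sufficiently negative input current,
# which is what the negative-bias trick relies on
gain, bias = np.ones(1), np.ones(1)
print(nengo.LIF().rates([-10.0], gain, bias))  # -> [[0.]]
print(nengo.LIF().rates([1.0], gain, bias))    # -> a nonzero firing rate

Running the same check with another neuron type tells you whether a large negative bias actually drives its rate to zero.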

Also note that if you are training the network, then you’ll want to turn the inhibition off during training.
You can do this by setting inhibit.output = 0 before training and then setting inhibit.output = -1e6 after training. You might need to rebuild the nengo_dl.Simulator object for the change to inhibit.output to take effect, though; I'm not entirely sure.
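
Here is a rough sketch of that train/test toggle, reusing model and inhibit from the Node-based example above. It rebuilds the nengo_dl.Simulator after changing inhibit.output, which is the conservative choice since Node outputs are captured when the model is built (untested, so treat it as a starting point):

# Training phase: turn the inhibition off
inhibit.output = 0
with nengo_dl.Simulator(model) as sim:
    sim.compile(
        optimizer=tf.optimizers.Adam(),
        loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )
    # sim.fit(...)  # train here

# Testing phase: turn the inhibition back on and rebuild the Simulator
inhibit.output = -1e6
with nengo_dl.Simulator(model) as sim:
    sim.run(1)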

@xchoo @arvoelke
Thank you very much for the elegant solutions! I will try them…
I’m learning a lot in this forum :grinning: