Use of probes and filtered probes

Hey,
So I am trying to mimic the spiking MNIST NengoDL example. The use of out_p and out_p_filt in that example works great and gives a useful view of the neurons converging on a solution.
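For reference, the probe pattern from that example (one raw probe for training and one filtered probe for watching the spiking output over time) is roughly:

out_p = nengo.Probe(out, label="out_p")
out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")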

I'm trying to adapt it into a regression problem, where a histogram goes through 4 dense layers (3 encoding layers down and 1 decoding layer up to a 4096-dimensional vector, which gets reshaped into a 64x64 image).
Keras/TF is able to solve the problem, and I have managed to get the model all the way through the porting process into Nengo; I just can't seem to make the jump between "integrating" and "optimising".

Specifically, whenever I try to add an out_p_filt-like probe I get the error message: ValueError: No data provided for "out_p_filt". Need data for each key in: ['out_p', 'out_p_filt']

Even though the same setup with the spiking MNIST example works fine?

Any thoughts?

Hi Paul,

I have a few ideas about what could cause this. Could you please post your code to help me identify the cause? Or, at the very least, post the lines for the probe target, the probe, and the simulator/compile call.

Thanks Ben,
I did think it could be something to do with the compile, as it's the only bit that really changes; below is my code.
I might try a few different loss functions and optimisers in the meantime.

import datetime

import numpy as np
import tensorflow as tf

import nengo
import nengo_dl

with nengo.Network(seed=0) as net:
    # copied from example
    net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
    net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
    net.config[nengo.Connection].synapse = None
    neuron_type = nengo.LIF(amplitude=0.01)

    # example
    nengo_dl.configure_settings(stateful=False)

    # the input node that will be used to feed in the input histogram
    inp = nengo.Node(np.zeros(7999))

    # I've tried this way and without the neuron_type
    hidden = nengo_dl.Layer(
        tf.keras.layers.Dense(units=1024, activation=tf.nn.relu))(inp)
    hidden = nengo_dl.Layer(neuron_type)(hidden)

    hidden = nengo_dl.Layer(
        tf.keras.layers.Dense(units=512, activation=tf.nn.relu))(hidden)
    hidden = nengo_dl.Layer(neuron_type)(hidden)

    hidden = nengo_dl.Layer(
        tf.keras.layers.Dense(units=256, activation=tf.nn.relu))(hidden)
    hidden = nengo_dl.Layer(neuron_type)(hidden)

    out = nengo_dl.Layer(
        tf.keras.layers.Dense(units=4096))(hidden)

    # we'll create two different output probes, one with a filter
    # (for when we're simulating the network over time and
    # accumulating spikes), and one without (for when we're
    # training the network using a rate-based approximation)
    out_p = nengo.Probe(out, label="out_p")
    # out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")


minibatch_size = 200
sim = nengo_dl.Simulator(net, minibatch_size=minibatch_size)

# add single timestep to training data
train_histograms = train_histograms[:, None, :]
train_images = train_images[:, None, :]
# when testing our network with spiking neurons we will need to run it
# over time, so we repeat the input/target data for a number of
# timesteps.
n_steps = 300
test_images = np.tile(test_images[:, None, :], (1, n_steps, 1))
test_histograms = np.tile(test_histograms[:, None, :], (1, n_steps, 1))

# def classification_accuracy(y_true, y_pred):
#     return tf.keras.metrics.Accuracy(
#         y_true[:, -1], y_pred[:, -1])

# note that we use `out_p_filt` when testing (to reduce the spike noise)
# sim.compile(loss={out_p_filt: classification_accuracy})
# sim.compile(optimizer=tf.optimizers.Adam(),
#             loss='mse',
#             # tf.losses.SparseCategoricalCrossentropy(from_logits=True), tf.losses.CategoricalCrossentropy(from_logits=False, label_smoothing=0),
#             metrics=["accuracy"])
# print("accuracy before training:",
#       sim.evaluate(test_histograms, {out_p: test_images}, verbose=0)["loss"])


do_training = True
if do_training:
    # run training
    sim.compile(optimizer=tf.optimizers.Adam(),
                loss='mse',
                # tf.losses.SparseCategoricalCrossentropy(from_logits=True), tf.losses.CategoricalCrossentropy(from_logits=False, label_smoothing=0),
                metrics=["accuracy"])
    # sim.compile(
    #     optimizer=tf.optimizers.RMSprop(0.001),
    #     loss={out_p: tf.keras.losses.MeanSquaredError()}
    # )
    sim.fit(train_histograms, {out_p: train_images}, epochs=1000)

    # save the parameters to file
    sim.save_params("./model_allphotons_{}".format(datetime.datetime.today()))
else:
    # download pretrained weights
    # urlretrieve(
    #     "https://drive.google.com/uc?export=download&"
    #     "id=1l5aivQljFoXzPP5JVccdFXbOYRv3BCJR",
    #     "mnist_params.npz")

    # load parameters
    sim.load_params("./model_allphotons_2020-03-17 12:24:15.238096")
    # sim.load_params("./model_9500photons_2020-03-17 00:52:19.731343")

# sim.compile(loss={out_p_filt: classification_accuracy})
# print("accuracy after training:",
#       sim.evaluate(test_histograms, {out_p: test_images}, verbose=0)["loss"])

data = sim.predict(test_histograms[:minibatch_size])

I haven't had a chance to reproduce this yet; I'll take a look when I get some time tomorrow if my suggestion doesn't fix the problem for you!

I noticed you had commented out the

# sim.compile(
#     optimizer=tf.optimizers.RMSprop(0.001),
#     loss={out_p: tf.keras.losses.MeanSquaredError()}
# )

in favour of

sim.compile(optimizer=tf.optimizers.Adam(),
            loss='mse',
            # tf.losses.SparseCategoricalCrossentropy(from_logits=True),tf.losses.CategoricalCrossentropy(from_logits=False, label_smoothing=0),
            metrics=["accuracy"])

There is an important distinction between loss=function and loss={out_p: function}. The former says "I want to use this loss function for all probes", so you'd have to provide data for every probe, which may be what's causing this error.
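For example, something along these lines (just a sketch reusing the probe and data names from your code) restricts the loss, and therefore the required training data, to out_p only:

# only out_p appears in the loss dict, so only out_p needs target data
sim.compile(
    optimizer=tf.optimizers.Adam(),
    loss={out_p: tf.keras.losses.MeanSquaredError()},
)
sim.fit(train_histograms, {out_p: train_images}, epochs=1000)

With the dict form, out_p_filt can stay defined in the network for running the trained model over time without needing training data of its own.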

Let me know if this helps; I will have time to try to reproduce this tomorrow!

Thanks Ben, spot on, that was it!

Thanks for the help, I knew it was something around the compile, I just didn't know what.

I'll have a few other questions later, but this has got me going at least.

One thing I am wondering about just now is the difference between the neuron types, as the ANN version of the model gives a pretty decent result in the end, while I am struggling to replicate that answer with the time-based SNN. Lowering the LIF amplitude has seemed to bring the answer closer.
Should I start with a rectified linear neuron and then move on up in the world?
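e.g. something like this swap is what I mean (just a sketch with the standard Nengo neuron types):

# non-spiking rectified linear neurons, closest to the original ANN
neuron_type = nengo.RectifiedLinear()
# or the spiking version of the same response curve
# neuron_type = nengo.SpikingRectifiedLinear()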

Cheers

Hi Paul,

This is a pretty general question, and I'll admit I'm not an expert. I'll point you to a couple of resources that may provide some more information, and if you have a more pointed question in the future we may be able to help more!

  • NengoDL has some tips in the docs
  • Here’s a great blog post from our own Travis DeWolf

I hope this helps

The blog post is super helpful actually.

Thanks again Ben