SNN for anomaly detection with inconsistent results

Hi!

I’m researching SNNs for anomaly detection, and so far I’ve tried a few strategies across three different datasets. I’ve used the same autoencoder architecture for all my experiments, and the same backpropagation-based optimization. This is my code for the network:

    import nengo
    import nengo_dl
    import tensorflow as tf

    # x_train: (n_examples, n_features) array of training data

    net = nengo.Network()
    with net:
        # defaults applied to every ensemble/connection below
        net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
        net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
        net.config[nengo.Connection].synapse = None
        neuron_type = nengo.LIF(amplitude=0.01)

        # input node (one dimension per feature)
        input_node = nengo.Node([0] * x_train.shape[1])

        # encoder: 64 -> 32 -> 16 neurons
        hidden_1 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(
            input_node,
            hidden_1.neurons,
            transform=nengo_dl.dists.Glorot())

        hidden_2 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(
            hidden_1.neurons,
            hidden_2.neurons,
            transform=nengo_dl.dists.Glorot())

        # bottleneck
        hidden_3 = nengo.Ensemble(n_neurons=16, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(
            hidden_2.neurons,
            hidden_3.neurons,
            transform=nengo_dl.dists.Glorot())

        # decoder: 16 -> 32 -> 64 neurons
        hidden_4 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(
            hidden_3.neurons,
            hidden_4.neurons,
            transform=nengo_dl.dists.Glorot())

        hidden_5 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type)
        nengo.Connection(
            hidden_4.neurons,
            hidden_5.neurons,
            transform=nengo_dl.dists.Glorot())

        # output layer: dense readout back to the input dimensionality
        out = nengo_dl.TensorNode(
            tf.keras.layers.Dense(x_train.shape[1]),
            shape_in=(64,),
            shape_out=(x_train.shape[1],),
            pass_time=False)
        nengo.Connection(hidden_5.neurons, out)

        # raw output for the training loss, filtered output for evaluation
        out_p = nengo.Probe(out, label="out_p")
        out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")

    # minibatch size chosen arbitrarily here; adjust as needed
    with nengo_dl.Simulator(net, minibatch_size=32) as sim:
        sim.compile(
            optimizer="adam",
            loss={out_p: tf.keras.losses.MeanSquaredError()},
        )
        # note: NengoDL expects the data to include a time axis,
        # i.e. x_train shaped (n_examples, n_steps, n_features)
        sim.fit(x_train, {out_p: x_train}, epochs=10)

As mentioned above, three datasets were used. The AUC-ROC performance for the anomaly detection task was inconsistent across them. For each dataset, 10 runs were made with different seeds:

  • Credit Card Fraud (284k instances, 29 features): 0.1973 (0.0242)
  • Thyroid (7k instances, 21 features): 0.5559 (0.0289)
  • Donors (619k instances, 10 features): 0.8706 (0.0336)

I don’t know exactly what is wrong with my network, but something must be.

Additionally, since I’m already here: is there a way for me to use Nengo core rules to optimize my network for this task? I’ve been looking into the topic, but I’m not sure if this is possible.

Hi @Kinteshi,

From your code alone, it’s hard to tell whether the architecture of your network is sufficient for the task at hand, or to guess at the expected performance of the network on the given datasets. Tuning a network to achieve good results is a bit of a black art, but with my limited experience, here are some suggestions:

  • Instead of starting with an SNN, use rate neurons in your network. Rate neurons remove the noise introduced by spikes and will help you determine whether the network architecture is actually sufficient to achieve the desired results.
  • In addition to using rate neurons, try the nengo.RectifiedLinear neuron model. It has a more linear response curve, which can help with the accuracy of the network (a minimal sketch of this swap follows this list).
  • I also recommend reading this NengoDL example. Looking at your network, I suspect that the relatively low firing rate of your neurons might be an issue (100 Hz with only 60-ish neurons is quite noisy).
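
To make the first two suggestions concrete, here is a minimal sketch against the code you posted; the only intended change is the neuron model (everything else stays as in your network):

    # Sketch: same autoencoder as above, but with non-spiking ReLU neurons.
    # Keeping amplitude=0.01 preserves the output scaling of the original LIF setup.
    neuron_type = nengo.RectifiedLinear(amplitude=0.01)

    # ...then build the input node, hidden ensembles, connections, output TensorNode,
    # and probes exactly as before, passing neuron_type=neuron_type to each Ensemble.

If this rate version trains well, you know the architecture itself is sufficient, and any remaining gap comes from the spiking conversion.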

Once you have a working network implemented with rate neurons, you can start swapping in spiking neurons and optimizing the firing rates to achieve the desired performance. Keep in mind that you may need to add synaptic filters between layers, add more neurons, or increase the firing rates to get the results you want.
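
As a rough sketch of those levers (spiking neurons, synaptic filters between layers, and higher firing rates), and keeping the config-default style of your code, the spiking version might set its defaults along these lines; the 0.01 s and 250 Hz values are placeholders for illustration, not tuned recommendations:

    with nengo.Network() as spiking_net:
        # higher maximum firing rates make each layer's output less noisy
        spiking_net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([250])
        spiking_net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
        # a lowpass filter between layers smooths the spike trains (instead of synapse=None)
        spiking_net.config[nengo.Connection].synapse = nengo.Lowpass(0.01)

        # spiking counterpart of the rate ReLU used while debugging
        neuron_type = nengo.SpikingRectifiedLinear(amplitude=0.01)

        # ...then add the same input node, hidden ensembles (with neuron_type=neuron_type),
        # connections, output TensorNode, and probes as in your original network.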

I’m not entirely sure what you mean by “Nengo core rules”. If you mean the core Nengo learning rules, I don’t think they would help with this task (although it might be possible if you find the right error function to apply).

Hi @xchoo!

Thank you so much for your help!

I tried what you recommended, and the results got better! In fact, better than the non-spiking version of the autoencoder, which makes me a little suspicious again. Right now I’m using SpikingRectifiedLinear neurons and a Lowpass filter of 0.01 s. I left the firing rates unchanged, though; increasing or optimizing them didn’t improve the results. Is that okay?

I’m not entirely sure what you mean by “Nengo core rules”. If you mean the core Nengo learning rules, I don’t think they would help with this task (although it might be possible if you find the right error function to apply).

Yes. The core Nengo learning rules. Okay then, I’ll keep on tweaking my networks.

Again, thank you!

If you are comparing a ReLU (rectified linear neuron) to a non-spiking LIF neuron, I’d expect the ReLU neuron (even a spiking one) to perform better than the LIF. The ReLU activation function is linear, so it does a better job of approximating functions (i.e., it will generally train better).
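
If you want to see this concretely, you can compare the two response curves with Nengo’s gain_bias/rates helpers. This is just an illustrative sketch, using the max_rates=100 and intercepts=0 settings from the network above:

    import numpy as np
    import nengo

    x = np.linspace(-1, 1, 5)  # a few inputs across the represented range

    for neuron_type in (nengo.LIFRate(), nengo.RectifiedLinear()):
        # gain/bias corresponding to max_rates=100 Hz and intercepts=0
        gain, bias = neuron_type.gain_bias(np.array([100.0]), np.array([0.0]))
        rates = neuron_type.rates(x, gain, bias)
        print(type(neuron_type).__name__, np.round(rates.ravel(), 1))

    # The RectifiedLinear curve rises linearly from the intercept to the maximum rate,
    # whereas the LIFRate curve is strongly nonlinear and saturates near the maximum rate.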

The parameters you are using for your network are… really up to you to decide! :smiley: But to me, using spiking ReLU neurons and a lowpass synapse of 0.01 s seems reasonable. As for the firing rates, it depends on what you initially set them to. If they started off in the hundreds, then increasing them probably won’t make much of a difference (a firing rate in the hundreds is already quite high).
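
If you want to check what firing rates your spiking neurons actually end up producing, one option is to probe a layer’s spike output and average it over a run. This is only a rough sketch against the network above (it reuses net, hidden_1, and the imports from your code; the one-second run is arbitrary, and you would want to drive the input node with representative data rather than the constant zeros it currently outputs):

    # add a spike probe to one of the hidden layers
    with net:
        spikes_p = nengo.Probe(hidden_1.neurons, label="hidden_1_spikes")

    with nengo_dl.Simulator(net) as sim:
        sim.run(1.0)  # one second of simulated time
        spikes = sim.data[spikes_p]
        # flatten any minibatch/time dimensions, then undo the amplitude scaling
        # (amplitude=0.01 above) to recover per-neuron firing rates in Hz
        rates_hz = spikes.reshape(-1, spikes.shape[-1]).mean(axis=0) / 0.01
        print("mean firing rate per neuron (Hz):", rates_hz)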