Hi!
I’m researching SNNs for anomaly detection, and so far I’ve tried a few strategies across three different datasets. In every experiment I’ve used the same autoencoder architecture and the same backpropagation-based optimization. This is my code for the network:
```python
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    # network-wide defaults (picked up by every Ensemble/Connection below)
    net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
    net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
    net.config[nengo.Connection].synapse = None
    neuron_type = nengo.LIF(amplitude=0.01)

    # input node
    input_node = nengo.Node([0] * x_train.shape[1])

    # encoder
    hidden_1 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type)
    nengo.Connection(input_node, hidden_1.neurons,
                     synapse=None, transform=nengo_dl.dists.Glorot())

    hidden_2 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type)
    nengo.Connection(hidden_1.neurons, hidden_2.neurons,
                     synapse=None, transform=nengo_dl.dists.Glorot())

    # bottleneck
    hidden_3 = nengo.Ensemble(n_neurons=16, dimensions=1, neuron_type=neuron_type)
    nengo.Connection(hidden_2.neurons, hidden_3.neurons,
                     synapse=None, transform=nengo_dl.dists.Glorot())

    # decoder
    hidden_4 = nengo.Ensemble(n_neurons=32, dimensions=1, neuron_type=neuron_type)
    nengo.Connection(hidden_3.neurons, hidden_4.neurons,
                     synapse=None, transform=nengo_dl.dists.Glorot())

    hidden_5 = nengo.Ensemble(n_neurons=64, dimensions=1, neuron_type=neuron_type)
    nengo.Connection(hidden_4.neurons, hidden_5.neurons,
                     synapse=None, transform=nengo_dl.dists.Glorot())

    # output layer: dense readout back to the input dimensionality
    out = nengo_dl.TensorNode(
        tf.keras.layers.Dense(x_train.shape[1]),
        shape_in=(64,),
        shape_out=(x_train.shape[1],),
        pass_time=False,
    )
    nengo.Connection(hidden_5.neurons, out, synapse=None)

    out_p = nengo.Probe(out, label="out_p")
    out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")

with nengo_dl.Simulator(net) as sim:
    sim.compile(
        optimizer="adam",
        loss={out_p: tf.keras.losses.MeanSquaredError()},
    )
    # x_train is shaped (batch, n_steps, features), as NengoDL expects
    sim.fit(x_train, {out_p: x_train}, epochs=10)
```
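For reference, NengoDL's `fit` expects arrays shaped `(batch, n_steps, features)`, so a static input has to be repeated along a time axis before training. A minimal sketch with dummy data (the shapes and `n_steps` here are illustrative, not my actual setup):

```python
import numpy as np

# Dummy stand-in for the real training data: 8 samples, 29 features.
x_static = np.random.RandomState(0).rand(8, 29).astype(np.float32)

# Repeat each sample along a new time axis; n_steps is a free choice.
n_steps = 1
x_train_t = np.tile(x_static[:, None, :], (1, n_steps, 1))
print(x_train_t.shape)  # (8, 1, 29)
```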
Well… as said before, three datasets were used, and the AUC-ROC performance on the anomaly detection task was inconsistent across them. For each dataset I made 10 runs with different seeds; the values below are mean (standard deviation):
- Credit Card Fraud (284k instances, 29 features): 0.1973 (0.0242)
- Thyroid (7k instances, 21 features): 0.5559 (0.0289)
- Donors (619k instances, 10 features): 0.8706 (0.0336)
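For completeness, this is roughly how reconstruction-error scores feed into AUC-ROC: the per-sample reconstruction MSE is used as the anomaly score. A sketch with dummy data and labels (all names and values here are illustrative, not my actual pipeline):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
x_test = rng.rand(100, 29)                                   # dummy inputs
x_recon = x_test + rng.normal(scale=0.1, size=x_test.shape)  # stand-in for model output
y_true = rng.randint(0, 2, size=100)                         # dummy labels, 1 = anomaly

# Higher reconstruction error => more anomalous.
scores = np.mean((x_test - x_recon) ** 2, axis=1)
auc = roc_auc_score(y_true, scores)
```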
I don’t know exactly what is wrong with my network, but given these results something must be.
Additionally, since I’m already here: is there a way to optimize my network for this task using Nengo’s core learning rules? I’ve been looking into the topic, but I’m not sure whether this is possible.