SNN model training

I am trying out this tutorial (Classifying Fashion MNIST with spiking activations — KerasSpiking 0.3.0 docs), with two differences: I am using the CIFAR-10 dataset, and my network architecture is different. After generating the spiking model, I am running into an error during training. I have attached the traceback below:

ValueError                                Traceback (most recent call last)
Input In [70], in <cell line: 1>()
----> 1 spiking_model.fit(train_sequences, test_sequences, epochs=2, verbose=2)

File ~/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File ~/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/data_adapter.py:1655, in _check_data_cardinality(data)
   1651   msg += "  {} sizes: {}\n".format(
   1652       label, ", ".join(str(i.shape[0])
   1653                        for i in tf.nest.flatten(single_data)))
   1654 msg += "Make sure all arrays contain the same number of samples."
-> 1655 raise ValueError(msg)

ValueError: Data cardinality is ambiguous:
  x sizes: 50000
  y sizes: 10000
Make sure all arrays contain the same number of samples.

(train_images and test_images are NumPy arrays.)

This is the way I have created train and test sequences:

# repeat the images for n_steps
n_steps = 10
train_sequences = np.tile(train_images[:, None], (1, n_steps, 1, 1, 1))
test_sequences = np.tile(test_images[:, None], (1, n_steps, 1, 1, 1))
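As a sanity check on what this tiling does, here is a tiny example with made-up dimensions (not the real CIFAR-10 arrays): `images[:, None]` inserts a time axis of length 1, and `np.tile` then repeats each image along it.

```python
import numpy as np

# Toy stand-in: one 2x2 "image" with 3 channels (not real CIFAR-10 data)
images = np.arange(12).reshape(1, 2, 2, 3)  # (batch, height, width, channels)

# images[:, None] adds a time axis; np.tile repeats each image n_steps times
n_steps = 4
sequences = np.tile(images[:, None], (1, n_steps, 1, 1, 1))

print(sequences.shape)  # (1, 4, 2, 2, 3)
# every timestep is an identical copy of the original image
print((sequences[0, 0] == sequences[0, 3]).all())  # True
```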

It looks like the number of training labels (i.e. your y size) doesn't match the number of training images. Based on the traceback, it looks like you're calling spiking_model.fit incorrectly. See the signature here. You need to do something like spiking_model.fit(train_sequences, train_labels, validation_data=(test_sequences, test_labels), ...).
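To make the mismatch concrete (shape tuples taken from the post; no data is loaded, and the label shape assumes CIFAR-10's usual (N, 1)): the original call passed test_sequences in the y position, so Keras compared 50000 x samples against 10000 "labels".

```python
# Illustrative shapes from the post
train_sequences_shape = (50000, 10, 32, 32, 3)
test_sequences_shape = (10000, 10, 32, 32, 3)
train_labels_shape = (50000, 1)  # assumed CIFAR-10 label shape

# fit(train_sequences, test_sequences, ...): first axes disagree,
# which triggers "Data cardinality is ambiguous"
mismatched = train_sequences_shape[0] == test_sequences_shape[0]
print(mismatched)  # False

# fit(train_sequences, train_labels, ...): first axes agree
matched = train_sequences_shape[0] == train_labels_shape[0]
print(matched)  # True
```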

Thanks Eric for your response. That solved the issue, but now I am getting another error, which I have provided below.

Epoch 1/2
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [19], in <cell line: 1>()
----> 1 spiking_model.fit(train_sequences, train_labels, validation_data=(test_sequences, test_labels), epochs=2, verbose=2)

File ~/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py:67, in filter_traceback.<locals>.error_handler(*args, **kwargs)
     65 except Exception as e:  # pylint: disable=broad-except
     66   filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67   raise e.with_traceback(filtered_tb) from None
     68 finally:
     69   del filtered_tb

File /tmp/__autograph_generated_filej8odifcy.py:15, in outer_factory.<locals>.inner_factory.<locals>.tf__train_function(iterator)
     13 try:
     14     do_return = True
---> 15     retval_ = ag__.converted_call(ag__.ld(step_function), (ag__.ld(self), ag__.ld(iterator)), None, fscope)
     16 except:
     17     do_return = False

ValueError: in user code:

    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1051, in train_function  *
        return step_function(self, iterator)
    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1040, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 1030, in run_step  **
        outputs = model.train_step(data)
    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/engine/training.py", line 889, in train_step
        y_pred = self(x, training=True)
    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
        raise e.with_traceback(filtered_tb) from None
    File "/home/saurav/miniconda3/envs/tf/lib/python3.9/site-packages/keras/layers/reshaping/reshape.py", line 108, in _fix_unknown_dimension
        raise ValueError(msg)

    ValueError: Exception encountered when calling layer "reshape_1" (type Reshape).
    
    total size of new array must be unchanged, input_shape = [40960], output_shape = [-1, 32, 32, 3]
    
    Call arguments received by layer "reshape_1" (type Reshape):
      • inputs=tf.Tensor(shape=(None, 40960), dtype=float32)

Following the example, I tried constructing a spiking CNN, but I am not sure whether I did it properly, so could you please give me some feedback?

My CNN network:

model = Sequential()

model.add(Conv2D(input_shape=(32, 32, 3), filters=4, kernel_size=(1, 1), strides=(1, 1), padding="valid", activation="relu"))
model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), activation="relu", padding="same"))
model.add(Conv2D(filters=72, kernel_size=(3, 3), strides=(1, 1), activation="relu", padding="same"))
model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(2, 2), activation="relu", padding="same"))
model.add(Conv2D(filters=256, kernel_size=(1, 1), strides=(1, 1), activation="relu", padding="same"))
model.add(Conv2D(filters=64, kernel_size=(1, 1), strides=(1, 1), activation="relu", padding="same"))

model.add(Flatten())
model.add(Dense(units=100, activation="relu"))
model.add(Dense(units=10, activation="softmax"))

My spiking CNN:

spiking_model = Sequential()

spiking_model.add(Reshape((-1, 32, 32, 3), input_shape=(None, 32, 32, 3)))

spiking_model.add(Conv2D(input_shape=(32, 32, 3), filters=4, kernel_size=(1, 1), strides=(1, 1), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Conv2D(filters=64, kernel_size=(3, 3), strides=(2, 2), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Conv2D(filters=72, kernel_size=(3, 3), strides=(1, 1), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Conv2D(filters=256, kernel_size=(3, 3), strides=(2, 2), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Conv2D(filters=256, kernel_size=(1, 1), strides=(1, 1), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Conv2D(filters=64, kernel_size=(1, 1), strides=(1, 1), padding="same"))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Flatten())

spiking_model.add(Reshape((-1, 32, 32, 3), input_shape=(None, 32, 32, 3)))

spiking_model.add(TimeDistributed(Dense(units=100)))
keras_spiking.SpikingActivation("relu", spiking_aware_training=False)

spiking_model.add(Dense(units=10, activation="softmax"))

Code for train and test sequences:

# repeat the images for n_steps
n_steps = 10
train_sequences = np.tile(train_images[:, None], (1, n_steps, 1, 1, 1))
test_sequences = np.tile(test_images[:, None], (1, n_steps, 1, 1, 1))
train_sequences.shape  # (50000, 10, 32, 32, 3)
test_sequences.shape   # (10000, 10, 32, 32, 3)