Crashing with no errors

Hi,

I’m pretty new to Nengo and I cannot find any solutions online for this. As the title says, my code crashes with no error message and I have no idea why. I’ve put the output and my code below.

Build finished in 0:00:00
Optimization finished in 0:00:00
Construction finished in 0:00:01
2022-03-26 13:16:21.292887: W tensorflow/core/framework/cpu_allocator_impl.cc:82] Allocation of 340623360 exceeds 10% of free system memory.
Process finished with exit code -1073740791 (0xC0000409)

import numpy as np
import tensorflow as tf
import nengo
import nengo_dl

# load in datasets
train = np.load("Datasets\\train.npy")
train_labels = np.load("Datasets\\train_labels.npy")
valid = np.load("Datasets\\valid.npy")
valid_labels = np.load("Datasets\\valid_labels.npy")
test = np.load("Datasets\\test.npy")
test_labels = np.load("Datasets\\test_labels.npy")

# reshape to fit Nengo requirements
train = train.reshape((train.shape[0], train.shape[1], -1))
valid = valid.reshape((valid.shape[0], valid.shape[1], -1))
test = test.reshape((test.shape[0], test.shape[1], -1))

print(train.shape)
print(valid.shape)
print(test.shape)

# define the model
with nengo.Network(seed=0) as net:

    # leaky integrate and fire
    neuron_type = nengo.LIF(amplitude=.01)

    # set input dimensions
    inp = nengo.Node(np.zeros(1280*18))

    # convolution layer with no activation
    out = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=8, kernel_size=(64, 18), strides=(32, 1)))(
        input=inp, shape_in=(1280, 18, 1)
    )

    # average pooling layer
    out = nengo_dl.Layer(tf.keras.layers.AveragePooling2D(pool_size=(19, 1)))(
        input=out, shape_in=(39, 1, 8)
    )


    # spiking activation after average pool, may not be required here
    out = nengo_dl.Layer(neuron_type)(
        input=out
    )

    # first fc layer
    out = nengo_dl.Layer(tf.keras.layers.Dense(units=16))(
        input=out
    )

    # spiking activation after first fc
    out = nengo_dl.Layer(neuron_type)(
        input=out
    )

    # final fc layer
    out = nengo_dl.Layer(tf.keras.layers.Dense(units=2))(
        input=out
    )

    # probes used for grabbing the outputs
    out_probe = nengo.Probe(out, label="out probe")
    out_probe_filter = nengo.Probe(out, synapse=0.1, label="out probe with filter")

sim = nengo_dl.Simulator(net)

# learning rate (lr was not defined in the original snippet; 0.001 used as an example)
lr = 0.001

sim.compile(
    optimizer=tf.optimizers.RMSprop(lr),
    loss={out_probe: tf.losses.SparseCategoricalCrossentropy(from_logits=True)}
)

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="model",
    monitor="val_accuracy",
    save_best_only=True,
    mode="max",
)

sim.fit(x=train, y={out_probe: train_labels},
        epochs=1,
        validation_data=({inp: valid}, {out_probe: valid_labels}),
        callbacks=[checkpoint])

sim.close()

Hi @Ricko, and welcome to the Nengo forums! :smiley:

The message you are seeing (Allocation of ... exceeds 10% of free system memory) indicates that your computer may not have enough memory to train that specific NengoDL model. Under the hood, NengoDL uses TensorFlow to do the training, and a quick Google search for that message suggests reducing the batch_size parameter in the fit function call. In NengoDL, sim.fit is a wrapper around TensorFlow’s model.fit, so you should be able to set the batch_size with:

sim.fit(x=train, y={out_probe: train_labels},
        epochs=1,
        batch_size=4,
        validation_data=({inp: valid}, {out_probe: valid_labels}),
        callbacks=[checkpoint])

According to the TensorFlow documentation, if no value is specified, batch_size defaults to 32. In the code above I chose a batch_size of 4 just to demonstrate how to change it; you’ll need to experiment on your own system to find the value that works best for you. :slight_smile:
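To build some intuition for why a smaller batch_size helps, here is a rough back-of-envelope sketch. The shapes are assumed from the (N, 1280, 18) inputs in your code; real training memory also includes weights, activations, gradients, and optimizer state, so this only accounts for the input tensors, but it shows that memory scales linearly with the batch size:

```python
# Back-of-envelope estimate of input memory per training batch.
# Shapes assumed from the (N, 1280, 18) arrays in the question.
n_steps = 1280        # timesteps per example
n_features = 18       # features per timestep
bytes_per_float = 4   # float32

def input_bytes(batch_size):
    """Bytes needed just to hold one batch of float32 inputs."""
    return batch_size * n_steps * n_features * bytes_per_float

print(input_bytes(32))  # TensorFlow's default batch size
print(input_bytes(4))   # reduced batch size: 8x less input memory
```

Halving the batch size halves this term (and the corresponding activation memory inside the network), which is why lowering batch_size is usually the first thing to try when you see that allocation warning.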