Hello everyone!
Edit: Please refer to the Questions below to get straight to the discussion. I have moved the code after the questions; it can be used as an example to reproduce the warnings.
I am building a TF-Keras ConvNet whose layers have kernel regularizers and max pooling, and I also wish to add dropout later. I am using a loss function defined in tensorflow_addons, i.e. not one of the losses defined in tensorflow. Also, please note that my end goal is to run such a network entirely on nengo-loihi.
Questions:
1> The warning UserWarning: Layer '<class 'tensorflow. ... Overwriting." % keras_layer that appears every time I import nengo_dl seems innocuous to me. Is it?
2> If I set max_to_avg_pool = True, it degrades the performance of the network considerably compared to the TF-Keras network. Please note that I am using nengo.SpikingRectifiedLinear() during conversion. My questions are:
a) Will such a nengo-dl network with max_to_avg_pool = False run entirely on Loihi with spiking neurons?
b) Or will some parts of the network (i.e. MaxPooling) run on CPU/GPU because those parts are TensorNodes? (I list the TensorNodes with the snippet right after my code below.)
c) If it runs only partially on Loihi, how can I get it running entirely on Loihi? Should I train my TF-Keras network with AveragePooling3D instead of MaxPooling and compare the performance (assuming that AveragePooling3D runs on Loihi with spiking neurons)? A variant block along these lines is sketched near the end of this post.
3> If I set inference_only = True, the warnings UserWarning: conv3d.kernel_regularizer ... if error_msg else "") disappear, but it again degrades the performance of the network compared to TF-Keras. This is probably because when inference_only = False, the neurons are still RectifiedLinear (even after conversion with SpikingRectifiedLinear) and thus perform better than SpikingRectifiedLinear. Is that right? Thus, if inference_only = False, then all the layers are still TensorNodes, and I guess they won't be running on Nengo-Loihi with spiking neurons.
4> If I include dropout between the dense layer and the output layer, nengo warns UserWarning: Layer type <class 'tensorflow.python.keras.layers.core.Dropout'> does not have a registered converter. Falling back to TensorNode. % (error_msg + ". " if error_msg else "") => the dropout layer is not supported. Is it? If I run such a net on nengo-loihi, will it have the same implication of running partially on Loihi and partially on CPU/GPU?
5> If I happen to train the converted nengo-dl network (with the tensorflow_addons loss function), it fails (I don't remember the exact error) due to no support for tensorflow_addons loss functions. I guess I have to declare my own custom loss function… is that right? Something like the sketch right below, perhaps.
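For reference, here is a minimal sketch of what I mean by a custom loss in question 5: a hand-written sigmoid focal cross-entropy following the Lin et al. (2017) formulation that the tensorflow_addons version implements (alpha=0.25 and gamma=2.0 are the usual defaults; I have not verified that it matches tfa's output exactly):

import tensorflow as tf

def sigmoid_focal_crossentropy(y_true, y_pred, alpha=0.25, gamma=2.0):
    # Clip predictions to avoid log(0).
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    # Element-wise binary cross-entropy.
    ce = -(y_true * tf.math.log(y_pred)
           + (1.0 - y_true) * tf.math.log(1.0 - y_pred))
    # Probability assigned to the true class.
    p_t = y_true * y_pred + (1.0 - y_true) * (1.0 - y_pred)
    # Focal re-weighting: down-weight easy, well-classified examples.
    alpha_factor = y_true * alpha + (1.0 - y_true) * (1.0 - alpha)
    modulating_factor = tf.pow(1.0 - p_t, gamma)
    return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)

Such a function could then be passed as loss=sigmoid_focal_crossentropy when compiling.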
Following is my code:
import nengo
import nengo_dl
import tensorflow as tf
import tensorflow_addons as tfa


def _get_cnn_block(conv, num_filters, ker_params, include_pooling=True,
                   rf=5e-5, pool_depth=2):
    conv = tf.keras.layers.Conv3D(
        num_filters, ker_params, padding="same", data_format="channels_last",
        activation='relu', kernel_initializer='he_uniform',
        kernel_regularizer=tf.keras.regularizers.l2(rf))(conv)
    if include_pooling:
        conv = tf.keras.layers.MaxPool3D(
            pool_size=(pool_depth, 2, 2), data_format="channels_last")(conv)
    return conv


def _get_dense_block(block, nn_dlyr, actvn="relu", rf=5e-5):
    dense = tf.keras.layers.Dense(
        nn_dlyr, activation=actvn, kernel_initializer="he_uniform",
        kernel_regularizer=tf.keras.regularizers.l2(rf))(block)
    return dense


def get_3d_cnn_model(inpt_shape, num_neurons_dlyr, num_clss, lr, rf):
    inpt = tf.keras.Input(shape=inpt_shape)
    conv0 = _get_cnn_block(inpt, 64, (3, 3, 3), pool_depth=1, rf=rf)
    conv1 = _get_cnn_block(conv0, 128, (3, 3, 3), pool_depth=2, rf=rf)
    flat = tf.keras.layers.Flatten(data_format="channels_last")(conv1)
    dense0 = _get_dense_block(flat, num_neurons_dlyr, rf=rf)
    output = _get_dense_block(dense0, num_clss, actvn="softmax", rf=rf)

    model = tf.keras.Model(inputs=inpt, outputs=output)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(lr=lr),
        loss=tfa.losses.focal_loss.sigmoid_focal_crossentropy,
        metrics=["accuracy"])
    return model


inpt_shape = (16, 36, 64, 3)
model = get_3d_cnn_model(inpt_shape, 2048, 12, 1e-4, 5e-5)

nengo_model = nengo_dl.Converter(
    model, swap_activations={tf.keras.activations.relu: nengo.SpikingRectifiedLinear()},
    scale_firing_rates=10, synapse=0.005,
    max_to_avg_pool=False, inference_only=False)
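To figure out which layers fell back to TensorNode (questions 2a/2b), I inspect the converted network like this (a small sketch; I am assuming nengo_dl.TensorNode is the type the converter falls back to):

# Sketch: list the parts of the converted network that were wrapped in
# TensorNodes (i.e. will still run as TensorFlow ops rather than as
# native Nengo/Loihi objects).
tensor_nodes = [
    node for node in nengo_model.net.all_nodes
    if isinstance(node, nengo_dl.TensorNode)
]
print("Layers falling back to TensorNode:")
for node in tensor_nodes:
    print(" ", node.label)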
While importing nengo-dl I get the following warning:
UserWarning: Layer '<class 'tensorflow.python.keras.layers.normalization_v2.BatchNormalization'>' already has a converter. Overwriting.
"Layer '%s' already has a converter. Overwriting." % keras_layer
And after converting the TF-Keras network to a Nengo-DL model, I get the following warnings,
UserWarning: conv3d.kernel_regularizer has value <tensorflow.python.keras.regularizers.L1L2 object at 0x7f3f6d700e10> != None, which is not supported (unless inference_only=True). Falling back to TensorNode.
% (error_msg + ". " if error_msg else "")
UserWarning: Cannot convert max pooling layers to native Nengo objects; consider setting max_to_avg_pool=True to use average pooling instead. Falling back to TensorNode.
% (error_msg + ". " if error_msg else "")
UserWarning: conv3d_1.kernel_regularizer has value <tensorflow.python.keras.regularizers.L1L2 object at 0x7f3f6d512d50> != None, which is not supported (unless inference_only=True). Falling back to TensorNode.
% (error_msg + ". " if error_msg else "")
UserWarning: dense.kernel_regularizer has value <tensorflow.python.keras.regularizers.L1L2 object at 0x7f3f6c00aad0> != None, which is not supported (unless inference_only=True). Falling back to TensorNode.
% (error_msg + ". " if error_msg else "")
UserWarning: dense_1.kernel_regularizer has value <tensorflow.python.keras.regularizers.L1L2 object at 0x7f3f6bff27d0> != None, which is not supported (unless inference_only=True). Falling back to TensorNode.
% (error_msg + ". " if error_msg else "")
and I have a few questions related to them (please see the Questions above). Please clarify.
(Note: As you can see in the code, I am not setting max_to_avg_pool and inference_only to True, hence the warnings.)
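For question 2c, here is the kind of Loihi-friendly block I have in mind: AveragePooling3D in place of MaxPool3D, and no kernel_regularizer (as I understand it, the regularizer only matters during Keras training, so a rebuilt model could receive the trained weights afterwards). This builder is only my own untested sketch:

# Sketch: a conversion-friendly CNN block with average pooling and no
# kernel regularizer. Since neither pooling nor the regularizer adds
# trainable weights, the trained weights should be transferable with
# new_model.set_weights(model.get_weights()) if the rest of the
# architecture is unchanged.
def _get_cnn_block_loihi(conv, num_filters, ker_params,
                         include_pooling=True, pool_depth=2):
    conv = tf.keras.layers.Conv3D(
        num_filters, ker_params, padding="same",
        data_format="channels_last", activation="relu",
        kernel_initializer="he_uniform")(conv)
    if include_pooling:
        conv = tf.keras.layers.AveragePooling3D(
            pool_size=(pool_depth, 2, 2),
            data_format="channels_last")(conv)
    return conv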
Please correct me if I am wrong anywhere, and let me know your suggestions in light of running the entire network on Loihi. Thanks!