Implementing a Regression Model

I am trying to implement a regression model that I will eventually run on Loihi. I started with the MNIST classification example and tried to modify the compile section.

I replaced RMSprop(0.001), SparseCategoricalCrossentropy, and sparse_categorical_accuracy with
Adam(0.001), MeanSquaredError, and Accuracy (as I used in my regular neural network model).

Unfortunately, I am getting 0% accuracy unless I convert it into a classification problem.

Input: 4 x 3 and Output: 1 x 3

My code is attached below:

import warnings                 
import matplotlib.pyplot as plt
import nengo
import nengo_dl
import nengo_loihi
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential, clone_model
from tensorflow.keras.layers import Input,Dense, Dropout, Activation, Flatten,Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.datasets import mnist
from tensorflow.keras.utils import normalize
from urllib.request import urlretrieve
import pickle

# ignore NengoDL warning about no GPU
warnings.filterwarnings("ignore", message="No GPU", module="nengo_dl")
np.random.seed(0)
tf.random.set_seed(0)
num_classes = 3
# load dataset 
train_X= np.array([[[ 0.,  0.,  1.],
                    [ 0.,  1.,  1.],
                    [ 0.,  2.,  1.],
                    [ 1., -1.,  0.]],
                   [[ 0.,  0.,  1.],
                    [ 0.,  1.,  1.],
                    [ 0.,  2.,  1.],
                    [ 2., -1.,  0.]],
                   [[ 0.,  0.,  1.],
                    [ 0.,  1.,  1.],
                    [ 0.,  2.,  1.],
                    [ 3., -1.,  0.]]])
train_Y = np.array([[0.029, 0.059, 0.079],
                    [0.298, 0.985, 0.546],
                    [0.854, 0.911, 0.405]])
# convert targets to class labels, as in the MNIST example
labels = []
for i in range(train_Y.shape[0]):
    output = np.argmax(train_Y[i])
    labels.append(int(output))
labels = np.array(labels)
train_images_R = train_X.reshape((train_X.shape[0], train_X.shape[1] * train_X.shape[2]))  # not needed
train_labels_R = train_Y.copy()  # not needed
train_Images = train_X.reshape((train_X.shape[0], 1, -1))  # (3, 4, 3) => (3, 1, 12)
train_Labels = train_Y.reshape((train_Y.shape[0], 1, -1))  # (3, 3) => (3, 1, 3)
train_Labels_RM = labels.reshape((labels.shape[0], 1, -1)) # (3,) => (3, 1, 1)

def modelDef():
    inp = tf.keras.Input(shape=(4, 3, 1), name="input")
    # transform input signal to spikes using trainable 1x1 convolutional layer
    to_spikes_layer = tf.keras.layers.Conv2D(
        filters=3,  # 3 neurons per pixel
        kernel_size=1,
        strides=1,
        activation=tf.nn.relu,
        use_bias=False,
        name="to-spikes",
    )
    to_spikes = to_spikes_layer(inp)
    # on-chip layers
    flatten = tf.keras.layers.Flatten(name="flatten")(to_spikes)
    dense0_layer = tf.keras.layers.Dense(units=10, activation=tf.nn.relu, name="dense0")
    dense0 = dense0_layer(flatten)
    # since this final output layer has no activation function,
    # it will be converted to a `nengo.Node` and run off-chip
    dense1 = tf.keras.layers.Dense(units=num_classes, name="dense1")(dense0)
    
    model = tf.keras.Model(inputs=inp, outputs=dense1)
    model.summary()
    return model
    
def train(params_file="./keras_to_loihi_params123", epochs=1, **kwargs):
    model = modelDef() 
    miniBatch = 3
    converter = nengo_dl.Converter(model, **kwargs)

    with nengo_dl.Simulator(converter.net, seed=0, minibatch_size=miniBatch) as sim:        
        # Option 1: compile settings following the MNIST classification example
        # sim.compile(
        #     optimizer=tf.optimizers.RMSprop(0.001),
        #     loss= {
        #         converter.outputs[model.get_layer('dense1')]: tf.losses.SparseCategoricalCrossentropy(
        #               from_logits=True
        #           )
        #       },
        #     metrics={converter.outputs[model.get_layer('dense1')]: tf.metrics.sparse_categorical_accuracy},
        #     )
        # sim.fit(
        #     {converter.inputs[model.get_layer('input')]: train_Images},
        #     {converter.outputs[model.get_layer('dense1')]: train_Labels_RM},
        #     epochs=epochs,
        # )
        # Option 2: regression-style compile settings (not following the MNIST structure)
        sim.compile(
            optimizer=tf.optimizers.Adam(0.001),
            loss= {
                converter.outputs[model.get_layer('dense1')]: tf.losses.MeanSquaredError()
                },
            metrics={converter.outputs[model.get_layer('dense1')]: tf.metrics.Accuracy()},
            )
        sim.fit(
            {converter.inputs[model.get_layer('input')]: train_Images},
            {converter.outputs[model.get_layer('dense1')]: train_Labels},
            epochs=epochs,
        )
        # save the trained parameters so they can be loaded at test time
        # (run_network below loads from this same params_file)
        sim.save_params(params_file)
# train this network with normal ReLU neurons
train(
    epochs=2000,
    swap_activations={tf.nn.relu: nengo.RectifiedLinear()},
)

Could you please suggest how to fix the issue (in the compile section)?

This is because accuracy only makes sense in the context of a classification problem. It compares how often your predictions equal your targets. Since you’re doing regression, your predictions will never be exactly equal to the targets.
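For instance, tf.keras.metrics.Accuracy() counts exact element-wise matches between targets and predictions, so with continuous regression outputs it will essentially always report 0 (a small illustration, not from the original post):

import tensorflow as tf

# Accuracy counts exact matches, which continuous regression
# outputs will essentially never produce
m = tf.metrics.Accuracy()
m.update_state([[0.029, 0.059, 0.079]],   # targets
               [[0.030, 0.058, 0.080]])   # close, but not exactly equal, predictions
print(m.result().numpy())  # 0.0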

I would recommend using tf.keras.metrics.MeanSquaredError() for your metric (or getting rid of the metrics entirely, since if you use MeanSquaredError, it will just be the same as your loss).
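For example, a minimal sketch of the suggested compile call, assuming the converter, model, and sim objects from your train() function:

# regression-style compile settings; MSE as a metric simply mirrors the
# loss, so the metrics argument can also be dropped entirely
sim.compile(
    optimizer=tf.optimizers.Adam(0.001),
    loss={converter.outputs[model.get_layer('dense1')]: tf.losses.MeanSquaredError()},
    metrics={converter.outputs[model.get_layer('dense1')]: tf.metrics.MeanSquaredError()},
)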

Thanks for your suggestion. I trained the network, but it is not able to generate accurate results. During training (epochs = 2000), the model converged (loss ≈ 0).

Epoch 999/1000
3/3 [==============================] - 0s 3ms/step - loss: 4.4954e-12 - probe_loss: 4.4954e-12 - probe_mean_squared_error: 4.4954e-12
Epoch 1000/1000
3/3 [==============================] - 0s 3ms/step - loss: 4.7453e-12 - probe_loss: 4.7453e-12 - probe_mean_squared_error: 4.7453e-12

But when I tested the model, it could not generate the desired results, even on the training data. For the first sample, the model generated these outputs:

Predict Outcome is: [[0.18543294 0.59750223 0.35057712]]
Actual Outcome is:  [[0.029 0.059 0.079]]

The code used for testing is shown below. I also ran the test on the Loihi cloud.

run_network(
    activation=nengo.SpikingRectifiedLinear(),
    scale_firing_rates=100,
    synapse=0.005,
)
run_network(
    activation=nengo_loihi.neurons.LoihiSpikingRectifiedLinear(),
    scale_firing_rates=100,
    synapse=0.005,
)
run_network_onLoihi()

def run_network(
    activation,
    params_file="./keras_to_loihi_params123",
    n_steps=30,
    scale_firing_rates=1,
    synapse=None,
    n_test=10,  # only predicting one sample here, so this is unused
):
    model = modelDefSmall_2()
    sample = 0  # index of the sample used for testing
    # convert the keras model to a nengo network
    nengo_converter = nengo_dl.Converter(
        model,
        swap_activations={tf.nn.relu: activation},
        scale_firing_rates=scale_firing_rates,
        synapse=synapse,
    )
    # get input/output objects
    nengo_input = nengo_converter.inputs[model.get_layer('input')]
    nengo_output = nengo_converter.outputs[model.get_layer('dense1')]

    # tile the test input across n_steps timesteps (for prediction)
    test_inputs = train_Images[sample]
    tiled_test_images = np.tile(test_inputs, (1, n_steps, 1))
    # set some options to speed up simulation
    with nengo_converter.net:
        nengo_dl.configure_settings(stateful=False)

    # build network, load in trained weights, run inference on the test input
    with nengo_dl.Simulator(
        nengo_converter.net, minibatch_size=1, progress_bar=False
    ) as nengo_sim:
        nengo_sim.load_params(params_file)
        data = nengo_sim.predict({nengo_input: tiled_test_images})

    # compare the prediction with the target, using the network
    # output on the last timestep
    predictions = data[nengo_output][:, -1]
    print("Predict Outcome is: {}".format(predictions))
    predictions_max = np.argmax(predictions)
    print("Predict Label is: {}".format(predictions_max))
    print("Actual Outcome is:  {}".format(train_Labels[sample]))
    print("Actual Label is:  {}".format(np.argmax(train_Labels[sample])))

Can anyone suggest (or point to a sample example of) how to improve the accuracy of a regression-based implementation? The model converges during training but is not able to generate accurate results during testing.

Hi @nayimrahman,

As @Eric suggested, you should switch the metric to tf.keras.metrics.MeanSquaredError() (or drop the metric entirely) and keep MeanSquaredError as your loss.

If you have that working, and it seems like using the mean squared error does improve the training loss, then the issue might be something else. It’s hard to debug deep learning models without knowing the full details of the model and the dataset characteristics, and even then, figuring out what is wrong with a DL model is a bit of a black art.

Generally, when trying to debug models, you want to first remove as much complexity from the system as possible, get that working, and then slowly add the complexity back in. In this case, since your model is a Keras model, I’d recommend first testing whether training and prediction give satisfactory results in plain TensorFlow. Once you get it to train and predict well in TensorFlow, start experimenting with NengoDL, but don’t use spiking neurons yet. Stick with the rate-based ReLU neurons until you get decent performance, then move on to the spiking neurons.
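For example, a minimal sketch of that first step, assuming the modelDef(), train_X, and train_Y arrays from the first post:

# train and predict with plain Keras (no NengoDL, no spiking neurons)
model = modelDef()
model.compile(optimizer=tf.optimizers.Adam(0.001), loss=tf.losses.MeanSquaredError())
x = train_X.reshape((-1, 4, 3, 1))  # Keras expects (batch, 4, 3, 1), with no time axis
model.fit(x, train_Y, epochs=2000, verbose=0)
print("Keras prediction:", model.predict(x)[0])
print("Target:          ", train_Y[0])

If the plain Keras model can fit these three samples, the regression setup itself is likely fine, and the discrepancy is more likely introduced by the conversion or the spiking simulation.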

Using this method of debugging, you’ll at least get a sense of what exactly is causing the performance issues with your model. :slight_smile:

I followed your code, but I got this error:

AttributeError: type object 'function' has no attribute 'from_config', raised from converter = nengo_dl.Converter(modelDef, **kwargs)

My data shapes are: train (8672, 1, 17), test (964, 1, 17), train labels (8672, 1, 1), and test labels (964, 1, 1).

def modelDef():
    inp = tf.keras.Input(shape=(1, 8672, 1,), name="input")
    to_spikes_layer = tf.keras.layers.Conv2D(
        filters=3,  # 3 neurons per pixel
        kernel_size=1,
        strides=1,
        activation=tf.nn.relu,
        use_bias=False,
        name="to-spikes",
    )
    to_spikes = to_spikes_layer(inp)
    flatten = tf.keras.layers.Flatten(name="flatten")(to_spikes)
    dense0_layer = tf.keras.layers.Dense(units=10, activation=tf.nn.relu, name="dense0")
    dense0 = dense0_layer(flatten)
    # since this final output layer has no activation function,
    # it will be converted to a `nengo.Node` and run off-chip
    dense1 = tf.keras.layers.Dense(units=num_classes, name="dense1")(dense0)

    model = tf.keras.Model(inputs=inp, outputs=dense1)
    model.summary()
    return model

def train(params_file="./keras_to_loihi_params", epochs=1, **kwargs):
    converter = nengo_dl.Converter(modelDef, **kwargs)
    with nengo_dl.Simulator(converter.net, seed=0, minibatch_size=200) as sim:
        sim.compile(
            optimizer=tf.optimizers.RMSprop(0.001),
            loss={
                converter.outputs[dense1]: tf.losses.SparseCategoricalCrossentropy(
                    from_logits=True
                )
            },
            metrics={converter.outputs[dense1]: tf.metrics.sparse_categorical_accuracy},
        )
        sim.fit(
            {converter.inputs[inp]: train},
            {converter.outputs[dense1]: train_Labels},
            epochs=epochs,
        )
        # save the parameters to file
        sim.save_params(params_file)

train(
    epochs=2,
    swap_activations={tf.nn.relu: nengo.RectifiedLinear()},
)

Hi @ssp,

There doesn’t seem to be anything wrong in that particular part of the code you attached, so I would assume that the error originates from some other part of the code. Unfortunately, without the full code to run, I am unable to speculate on the exact cause of this error. If you could post the script, a Jupyter notebook, or a minimal version of the code that exhibits this behaviour, that would be much more helpful.

I have attached the code (PFA).
I want to do a regression task. I built the model, but I ran into a problem when I tried to simulate it using:

# Train with NengoDL simulator
with nengo_dl.Simulator(net, minibatch_size=100) as sim:
    
    sim.compile(loss={out_p_filt: classification_accuracy})
print(
    "Accuracy after training:",
    sim.evaluate(test_images, verbose=0)["loss"],
)

But I got this error (full script attached: test_simulation.py, 4.6 KB):

SimulatorClosed: Cannot call evaluate after simulator is closed
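For reference, this particular error is raised because sim.evaluate is called after the with nengo_dl.Simulator(...) block has exited, at which point the simulator has already been closed. A minimal sketch of the fix, assuming the net, out_p_filt, test_images, and classification_accuracy objects from the attached script:

# keep evaluate() inside the `with` block so the simulator is still open;
# `net`, `out_p_filt`, `test_images`, and `classification_accuracy` are
# assumed to come from the attached test_simulation.py
with nengo_dl.Simulator(net, minibatch_size=100) as sim:
    sim.compile(loss={out_p_filt: classification_accuracy})
    print(
        "Accuracy after training:",
        sim.evaluate(test_images, verbose=0)["loss"],
    )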

Hello @xchoo, thanks for responding.
I simulated this code again, and now it works after changing the loss and the training dimensions. But I am a bit confused about how to interpret these results.
I predicted the muscle activities for 8 different targets, but in general, could you guide me on how to interpret them?


MSE Nengo run net prediction of 1200 samples: 3.9005853530684655
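For reference, an MSE figure like the one above is the mean of the squared differences between the predicted and target values; a minimal sketch of how it is typically computed, using hypothetical stand-in arrays (the attached script's actual variable names are not shown in the thread):

import numpy as np

# hypothetical stand-ins for the predicted and target muscle activities,
# shape (n_samples, n_outputs)
rng = np.random.default_rng(0)
predictions = rng.random((1200, 8))
targets = rng.random((1200, 8))

# MSE: mean of the squared element-wise differences
mse = np.mean((predictions - targets) ** 2)
print(f"MSE Nengo run net prediction of {len(predictions)} samples: {mse}")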