Legendre memory unit (LMU) for normal MNIST

Hi, all

I wanted to know: is it possible to train the proposed model on the normal MNIST dataset, without the permutation? And if so, how?

Yours Sincerely,

Hi Ehsan,

Welcome to the Nengo forums! :smiley:

Yes! You can do this on the regular MNIST dataset. The goal of this model is to classify a sequence of pixels (the pixels that make up an MNIST digit) into the appropriate class. To do this, the input 28 x 28 image is flattened, then fed to the network one pixel at a time. The “permutation” refers to the order of those pixels. For this network, the pixel order is randomly shuffled to demonstrate that the network is able to learn correlations between pixels across the entire pixel sequence (i.e., it needs a long memory span to do this). If the pixels are not shuffled, the network may only need to learn correlations between pixels that are close together (i.e., it only needs a short memory span) in order to make the appropriate classification.

You can disable the pixel order permutation by commenting out the following lines from the code:

# Apply the same fixed random pixel permutation to the train and test sets
perm = rng.permutation(train_images.shape[1])
train_images = train_images[:, perm]
test_images = test_images[:, perm]
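To make the difference between the two setups concrete, here is a minimal, self-contained sketch of the pixel-sequence framing. Dummy random arrays stand in for the real MNIST data, and the variable names (`train_images`, `test_images`, `rng`) are assumed to match the example code above:

```python
import numpy as np

rng = np.random.RandomState(seed=0)

# Pretend we have 10 train and 5 test MNIST images, each already
# flattened from 28 x 28 to a 784-pixel vector.
train_images = rng.rand(10, 28 * 28)
test_images = rng.rand(5, 28 * 28)

# psMNIST: one fixed random permutation of the pixel order,
# applied identically to every image.
perm = rng.permutation(train_images.shape[1])
permuted_train = train_images[:, perm]
permuted_test = test_images[:, perm]

# Plain sequential MNIST: skip the permutation and keep the pixels
# in their natural row-major order.
sequential_train = train_images

# Either way, the network sees one pixel per timestep:
# shape (n_images, n_timesteps=784, 1).
sequential_train = sequential_train.reshape(train_images.shape[0], -1, 1)
```

In both cases the task is temporal; the permutation only changes how far apart correlated pixels end up in the sequence.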

Thank you so much!!
Actually, I had already tried that, but I also needed to set the “do_training” variable to “True”. :sweat_smile: :sweat_smile:
I wanted to examine the LMU’s potential for image classification, since the network would need to learn spatial properties as well, and that should be a simpler task than psMNIST.

I would say that if you want to test the LMU on a spatial MNIST task, you’ll need to restructure the input to present the MNIST image as a whole instead of as a sequence of pixels. Flattening the image into a pixel sequence transforms the problem from a spatial one into a temporal one, and the LMU excels at temporally dependent tasks.
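One hypothetical way to do that restructuring, sketched with dummy data (the shapes follow the usual Keras-style `(batch, timesteps, features)` convention, which is an assumption here, not something from the original code):

```python
import numpy as np

n_images = 10
train_images = np.random.rand(n_images, 28 * 28)  # dummy flattened MNIST

# Temporal (psMNIST-style) framing: 784 timesteps, 1 feature each.
# The LMU's memory must span the whole pixel sequence.
temporal = train_images.reshape(n_images, 784, 1)

# Spatial framing: 1 timestep, 784 features. Each image is presented
# as a whole, so the memory component is no longer doing the heavy
# lifting; this mostly tests the rest of the architecture on a
# static classification task.
spatial = train_images.reshape(n_images, 1, 784)
```

Note that with the spatial framing the problem is essentially static image classification, so the comparison says less about the LMU’s memory and more about the surrounding network.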