NEF for classification: choosing a different cost function?

When you’re training a neural network for classification, you usually use Cross-Entropy Error rather than Mean Squared Error as the measure of error between the desired and actual network output. When training with the NEF, we always use Mean Squared Error.

  1. Has anyone tried to use Cross-Entropy Error instead? If not,
  2. How would you go about implementing it? Would you need to make your own solver or would there be some other way?
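
For concreteness, by these two cost functions I mean the usual definitions, where $\mathbf{t}_n$ is the target (one-hot for classification) and $\mathbf{y}_n$ is the network output for sample $n$:

$$
\text{MSE} = \frac{1}{N}\sum_{n=1}^{N} \lVert \mathbf{t}_n - \mathbf{y}_n \rVert^2
\qquad
\text{CE} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k} t_{nk} \log p_{nk},
\quad
p_{nk} = \frac{\exp(y_{nk})}{\sum_j \exp(y_{nj})}
$$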

I’m not aware of anyone using cross-entropy error, and I’m also not aware of anyone who’s done NEF-style optimization for direct classification. It’s very likely possible, but yeah, why classify when you can regress?

One way to do something similar with Nengo now would be to output a 0 or 1 (classified / not classified) as a dimension for each possible class. That’s what I did in this paper to get the classification accuracy metrics in Table 3.
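
In case it helps as a starting point, here is a minimal sketch of that 0-or-1-per-class idea on made-up data; the dataset, neuron count, and regularization below are placeholders chosen for illustration, not what was used in the paper:

```python
import numpy as np
import nengo

# Placeholder dataset: `inputs` are feature vectors, `labels` are class indices.
rng = np.random.RandomState(0)
n_samples, dims, n_classes = 300, 2, 3
inputs = rng.uniform(-1, 1, size=(n_samples, dims))
labels = rng.randint(n_classes, size=n_samples)
targets = np.eye(n_classes)[labels]          # one 0/1 output dimension per class

with nengo.Network() as model:
    stim = nengo.Node(lambda t: inputs[int(t / 0.1) % n_samples])
    ens = nengo.Ensemble(n_neurons=200, dimensions=dims, radius=np.sqrt(dims))
    out = nengo.Node(size_in=n_classes)

    nengo.Connection(stim, ens)
    # Decoders are solved (least squares, as usual) against the one-hot
    # targets, so each output dimension approximates the evidence for one
    # class; the argmax over `out` gives the predicted class.
    nengo.Connection(ens, out,
                     eval_points=inputs,
                     function=targets,
                     solver=nengo.solvers.LstsqL2(reg=0.1))
    probe = nengo.Probe(out, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(1.0)
predictions = np.argmax(sim.data[probe], axis=1)
```

Note that the decoders here are still found with ordinary regularized least squares; the classification behaviour comes entirely from the choice of targets.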

> why classify when you can regress

I don’t think I understand the difference between these mathematically, which might be the core of my problem.

> One way to do something similar with Nengo now would be to output a 0 or 1 (classified / not classified) as a dimension for each possible class.

Yeah, I’ve been using one-hot encoded vectors for my classification experiments.
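
For what it’s worth, here is a rough sketch of what the “make your own solver” route might look like: a `Solver` subclass that replaces the least-squares fit with plain gradient descent on softmax cross-entropy. The targets are assumed to be one-hot vectors, and the step size and iteration count are arbitrary; I haven’t benchmarked this against the default solver.

```python
import numpy as np
import nengo


class SoftmaxCrossEntropySolver(nengo.solvers.Solver):
    """Decoder solver that does gradient descent on softmax cross-entropy.

    A rough sketch only: it assumes the connection's targets Y are one-hot
    class vectors, and the step size / iteration count are arbitrary.
    """

    def __call__(self, A, Y, rng=np.random):
        n_iter, lr = 500, 1e-4   # lr must be small: A holds firing rates in Hz
        n_points = A.shape[0]
        X = np.zeros((A.shape[1], Y.shape[1]))            # decoders, as in Lstsq solvers
        for _ in range(n_iter):
            logits = A.dot(X)
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)             # row-wise softmax
            X -= lr * A.T.dot(p - Y) / n_points           # cross-entropy gradient step
        return X, {"n_iter": n_iter}
```

It would drop into a connection the same way as the built-in solvers, e.g. `nengo.Connection(ens, out, eval_points=inputs, function=targets, solver=SoftmaxCrossEntropySolver())`.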

Eric has been doing exactly these things for MNIST for fun :slight_smile: I’d recommend talking to him.

FYI, Eric’s code for classifying MNIST can be found in the nengo_examples repository.