When you’re training a neural network for classification, you usually use Cross-Entropy Error rather than Mean Squared Error as the measure of error between the desired and actual network output. When training with the NEF, we always use Mean Squared Error.
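For concreteness, here is a minimal NumPy sketch of the two error measures being contrasted (the target/prediction arrays are made-up toy values, not from any NEF model):

```python
import numpy as np

def mse(y, t):
    # Mean Squared Error: average squared difference between
    # predicted outputs y and desired outputs t
    return np.mean((y - t) ** 2)

def cross_entropy(y, t, eps=1e-12):
    # Cross-Entropy Error for one-hot targets t and predicted
    # class probabilities y (each row of y sums to 1);
    # eps guards against log(0)
    return -np.mean(np.sum(t * np.log(y + eps), axis=1))

t = np.array([[1.0, 0.0], [0.0, 1.0]])  # one-hot class targets
y = np.array([[0.9, 0.1], [0.2, 0.8]])  # predicted probabilities

print(mse(y, t))
print(cross_entropy(y, t))
```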

- Has anyone tried to use the Cross-Entropy Error instead? If not,
- How would you go about implementing it? Would you need to write your own solver, or would there be some other way?
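As a starting point for discussion, here is one way the decoder solve might look with a cross-entropy objective: instead of the closed-form least-squares (MSE) solution, minimize a softmax cross-entropy loss over the decoders by gradient descent. This is a plain-NumPy sketch, independent of any particular solver API; the activity matrix and labels below are toy values for illustration only.

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def solve_cross_entropy(A, T, lr=0.1, n_steps=500):
    """Solve for decoders D by gradient descent on softmax cross-entropy.

    A: neuron activities (n_samples x n_neurons)
    T: one-hot class targets (n_samples x n_classes)
    """
    D = np.zeros((A.shape[1], T.shape[1]))
    for _ in range(n_steps):
        Y = softmax(A @ D)
        # Gradient of mean cross-entropy w.r.t. D is A^T (Y - T) / N
        D -= lr * A.T @ (Y - T) / A.shape[0]
    return D

# Toy activities and a linearly decodable labeling (no bias/intercept
# term is included here, for brevity)
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(100, 20))
labels = (A[:, 0] > 0.5).astype(int)
T = np.eye(2)[labels]

D = solve_cross_entropy(A, T)
acc = np.mean(np.argmax(A @ D, axis=1) == labels)
```

Whether this could be wrapped as a custom solver, or whether the iterative fit would negate the NEF's one-shot decoding advantage, is exactly the open question above.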