Neural approaches to vector normalisation?

When you add two vectors together, such as DOG+CAT, you often get a vector whose magnitude exceeds 1. What neural options are there for normalization? I remember @xchoo doing a presentation on this at one point. I seem to remember the best bet was the soft normalisation performed by neural ensembles anyway.

  • Soft normalization with neurons. The result depends on how the vector is split into subdimensions, because each segment of subdimensions is normalized independently, which can affect different vectors differently. (A sketch of this effect follows below.)
  • Calculating the length and dividing by it. This works well if the range of encountered vector lengths is small and does not include values close to 0 (see the second sketch below).
  • Subtracting a scaled version of the vector from itself whenever its length exceeds 1. This does not normalize shorter vectors, and the normalization is not immediate; it takes some time to converge. With an external input applied, the steady state will stay above 1 (but in a memory, the input can be inhibited, and the length will then settle to 1 during the storage period). The third sketch below illustrates the idea. I have a Jupyter notebook implementing this approach, but the forum does not allow me to upload it.
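
A minimal sketch of the soft-normalization effect, assuming Nengo (the ensemble parameters here are illustrative, not from @xchoo's presentation): neuron tuning curves saturate outside the ensemble's radius, so an input vector longer than the radius is decoded at a compressed length. With an EnsembleArray, this saturation applies per segment of subdimensions, which is why the split matters.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Constant input of length sqrt(2) ~ 1.41, beyond the radius of 1
    stim = nengo.Node([1.0, 1.0])
    ens = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.0)
    nengo.Connection(stim, ens)
    probe = nengo.Probe(ens, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

# Saturation compresses the represented vector back toward the unit sphere
print(np.linalg.norm(sim.data[probe][-1]))  # noticeably less than 1.41
```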
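For the explicit divide-by-length option, a decoded connection can approximate the division directly. A minimal sketch, again assuming Nengo; the `epsilon` floor is my addition to guard against dividing by lengths near 0, which is exactly the regime where this approach breaks down:

```python
import numpy as np
import nengo

def normalize(x, epsilon=1e-3):
    # Divide by the length; the epsilon floor avoids blow-up near 0,
    # but vectors much shorter than epsilon are still handled poorly.
    return x / max(np.linalg.norm(x), epsilon)

with nengo.Network() as model:
    stim = nengo.Node([0.8, 0.8])  # length ~ 1.13
    ens = nengo.Ensemble(n_neurons=200, dimensions=2, radius=1.5)
    out = nengo.Ensemble(n_neurons=200, dimensions=2)
    nengo.Connection(stim, ens)
    # The decoders of this connection approximate the normalize function
    nengo.Connection(ens, out, function=normalize)
    probe = nengo.Probe(out, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

print(np.linalg.norm(sim.data[probe][-1]))  # close to 1
```

The approximation is only fitted over the ensemble's radius, which is one reason it works best when the encountered lengths fall in a narrow range.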
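And for the dynamics-based option, a rough sketch of the idea, assuming Nengo; this is my own reconstruction, so the parameters and the exact feedback function may differ from the attached notebook. An integrator memory gets an extra recurrent term implementing dx/dt = -x * max(0, ||x|| - 1), which leaves short vectors alone and gradually pulls longer ones down to length 1 once the input is inhibited:

```python
import numpy as np
import nengo

d = 3      # dimensionality, chosen for illustration
tau = 0.1  # synaptic time constant of the recurrent connections

def shrink(x):
    # Corrective term: subtract a scaled copy of x once its length
    # exceeds 1; vectors shorter than 1 are left untouched.
    return -x * max(0.0, np.linalg.norm(x) - 1.0)

with nengo.Network() as model:
    # Load a vector of length ~ 1.27 for 0.3 s, then inhibit the input
    stim = nengo.Node(lambda t: [3.0, 3.0, 0.0] if t < 0.3 else [0.0] * d)
    memory = nengo.Ensemble(n_neurons=600, dimensions=d, radius=1.5)
    # Standard NEF integrator: tau-scaled input plus identity feedback
    nengo.Connection(stim, memory, transform=tau, synapse=tau)
    nengo.Connection(memory, memory, synapse=tau)
    # Extra feedback implementing dx/dt = -x * max(0, ||x|| - 1)
    nengo.Connection(memory, memory, function=shrink, transform=tau, synapse=tau)
    probe = nengo.Probe(memory, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(2.0)

print(np.linalg.norm(sim.data[probe][-1]))  # settles toward 1 after input stops
```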

I added ipynb and py to the list of allowed extensions, so it should be possible now.

Here is the notebook:
Vector normalization using dynamics.ipynb (297.0 KB)
