I was looking at the heteroassociative memory example here and have a few questions.
I’ve noticed that there are two learning rules, Voja and PES, which are used to learn new associations. It appears that Voja modifies the encoders so that the neurons’ preferred directions (together with the chosen intercepts) are well suited to the keys, and then PES learns the decoders that compute the weight transform mapping those specific keys to the values. Is my understanding of this correct?
So the Voja rule is learning the keys, in a sense: it modifies the encoders so that they cluster around the inputs (keys), and from each cluster PES trains the decoders to map that cluster to a value. When learning the decoders, how does the weight change avoid destructively interfering with previously learned key-value mappings?
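To make my question concrete, here is the picture I have in mind, sketched in NumPy. I'm assuming the PES update is an outer product of neuron activities and the error (the exact sign and scaling in Nengo may differ), and the activity vectors below are hypothetical, standing in for the sparse responses I'd expect after Voja has clustered the encoders:

```python
import numpy as np

n_neurons, d = 10, 2

# Hypothetical activities for two different keys: after Voja clusters the
# encoders, each key drives a mostly disjoint set of neurons.
a_key1 = np.array([1.0, 0.8, 0.9, 0, 0, 0, 0, 0, 0, 0])
a_key2 = np.array([0, 0, 0, 1.0, 0.7, 0.6, 0, 0, 0, 0])

decoders = np.zeros((n_neurons, d))
error = np.array([0.5, -0.2])  # value error while learning key 1
kappa = 1e-2                   # learning rate (illustrative)

# PES-style outer-product update: only active neurons change.
delta = kappa * np.outer(a_key1, error)
decoders += delta

# Neurons silent for key 1 (those tuned to key 2) are untouched, so the
# mapping previously learned for key 2 would be unaffected.
print(np.allclose(decoders[3:], 0))  # True
```

If this is right, the protection against destructive interference comes entirely from the activities being sparse and non-overlapping; is that the intended mechanism, or is there more to it?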
Could someone provide some additional insight into this passage:
An important quantity is the largest dot-product between all pairs of keys, since a neuron’s intercept should not go below this value if it’s positioned between these two keys. Otherwise, the neuron will move back and forth between encoding those two inputs.
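Here is how I read that heuristic, sketched in NumPy for random unit-length keys. The variable names are my own, and the way I pick the intercept (halfway between the largest pairwise dot product and 1) is just one illustrative choice, not necessarily what the example does:

```python
import numpy as np

rng = np.random.default_rng(1)
num_keys, d = 10, 4

# Random keys normalized to the unit hypersphere.
keys = rng.standard_normal((num_keys, d))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)

# Largest dot product between distinct keys.
sims = keys @ keys.T
np.fill_diagonal(sims, -np.inf)
max_pair_dot = sims.max()

# Setting every neuron's intercept above this value means no neuron can
# fire for two different keys at once, so (as I understand the passage)
# Voja can't drag the same encoder back and forth between them.
intercept = (max_pair_dot + 1.0) / 2.0
print(max_pair_dot < intercept < 1.0)  # True
```

Is the idea simply that a neuron whose intercept sits below `max_pair_dot` would be recruited by both keys, and hence its encoder would oscillate between them as each is presented?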