When you bind vector A with vector B, you convolve the two, a process that involves multiplying elements together. As you keep binding more concepts, A to B to C to D…, the elements of the resulting vector could conceivably grow larger and larger (unless some are fractional or zero).
In the real world, neurons have maximum firing rates, and post-synaptic potentials have maximums too. So I was wondering whether there is a 'saturation' of the elements in these vectors, so that they cannot rise above some maximum. Secondly, is there a normalization of the vectors, so that the total magnitude of a vector cannot exceed some value?
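To make the growth concern concrete, here is a small numpy sketch (my own toy construction, not taken from any particular HRR implementation) of binding by circular convolution. With elements drawn i.i.d. from N(0, 1/n) the vectors have norm near 1 and repeated binding stays bounded, but if the constituent vectors have norm above 1 the magnitudes compound multiplicatively, and an explicit normalization step is what caps them:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024

def cconv(a, b):
    # circular convolution via FFT -- the binding operation in HRRs
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def rand_vec(n, scale=1.0):
    # elements ~ N(0, scale^2 / n), so the vector's norm is roughly `scale`
    return rng.normal(0.0, scale / np.sqrt(n), n)

# with ~unit-norm vectors, repeated binding keeps the magnitude near 1
v = rand_vec(n)
for _ in range(5):
    v = cconv(v, rand_vec(n))
unit_norm = np.linalg.norm(v)
print(unit_norm)            # stays close to 1

# with norm-2 vectors, the magnitude compounds (roughly 2**6 after 5 binds)
w = rand_vec(n, scale=2.0)
for _ in range(5):
    w = cconv(w, rand_vec(n, scale=2.0))
grown_norm = np.linalg.norm(w)
print(grown_norm)           # far above 1

# an explicit normalization step acts as the 'saturation': rescale to unit length
w /= np.linalg.norm(w)
print(np.linalg.norm(w))    # exactly 1
```

So at least in this toy setting, nothing in the convolution itself saturates; keeping magnitudes bounded is a matter of choosing the element variance or normalizing after each bind.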
My second question is about the rationale for selecting convolution as the binding operation. I understand that it has three desirable properties:
- it produces a product that is dissimilar to (roughly orthogonal to) each of its constituents
- if a vector representing 'pink' is similar to one representing 'red', while 'green' is less similar, then 'red' convolved with vector A will be more similar to 'pink' convolved with A than 'green' convolved with A will be
- you can invert vector 'pink' and convolve it with the bound vector 'pink-A' to get an approximation of 'A' back again
So these are all advantages.
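All three properties can be checked numerically. Below is a sketch under my own assumptions: the names (`pink`, `red`, `green`, `A`) are illustrative, 'pink' is built as 'red' plus noise purely to make it similar to 'red', and unbinding uses the index-reversal involution that serves as an approximate inverse under circular convolution:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024

def cconv(a, b):
    # circular convolution (binding) via FFT
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def involution(a):
    # a*[i] = a[-i mod n]: approximate inverse under circular convolution
    return np.roll(a[::-1], 1)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def rand_vec(n):
    # elements ~ N(0, 1/n), so the norm is near 1
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

A = rand_vec(n)
red = rand_vec(n)
green = rand_vec(n)
# toy construction: 'pink' is 'red' plus a little noise, then renormalized
pink = red + 0.3 * rand_vec(n)
pink /= np.linalg.norm(pink)

bound = cconv(pink, A)

# 1. the bound vector resembles neither constituent
print(cosine(bound, pink), cosine(bound, A))   # both near 0

# 2. binding preserves similarity structure
print(cosine(bound, cconv(red, A)))    # high, since red ~ pink
print(cosine(bound, cconv(green, A)))  # near 0

# 3. unbinding with the involution recovers a noisy copy of A
recovered = cconv(bound, involution(pink))
print(cosine(recovered, A))            # well above chance
```

Note that the recovered vector in step 3 is only an approximation of A; in Plate-style HRRs it is typically passed through a clean-up memory that snaps it to the nearest stored item.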
But is there any theoretical (not just practical) rationale for choosing ‘convolution’?