I have finished reading an article on NEF and SPA, and since I don’t want to bombard this forum with questions without doing as much reading as possible first, this is my last set of questions for a while. Some of these questions are not precise at all, but any answer is appreciated.
Questions:

1. There seem to be three ways of combining vectors: summing them, binding them, and taking their dot product. Binding and summing together make a structure. You can interrogate that structure for its parts by binding it with an inverse vector. But without that inverse, is the resulting vector at all similar to the concepts that make it up? And if the structure has the same number of dimensions as its components, does it lose much of the components' information?
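To make this question concrete, here is a small numpy sketch of what I mean (assuming random unit vectors as concepts and circular convolution as the binding operator, as in the article; `bind`, `inverse`, and the vector names are just my own placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # a high-dimensional space, as in the SPA

def unit(v):
    return v / np.linalg.norm(v)

def bind(a, b):
    # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    # approximate inverse: the involution (reversed) vector
    return np.concatenate(([a[0]], a[:0:-1]))

role, filler = unit(rng.standard_normal(D)), unit(rng.standard_normal(D))
structure = bind(role, filler)

print(np.dot(structure, filler))                       # near 0: binding hides the parts
print(np.dot(bind(structure, inverse(role)), filler))  # near 1: unbinding recovers the filler
print(np.dot(unit(role + filler), filler))             # about 0.7: a sum stays similar to its parts
```

So, at least in this toy version, the bound pair looks unlike either of its parts, while a plain sum stays similar to each of them, and unbinding with the inverse recovers a part with little loss even though the dimension never grows.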

2. Suppose we want to know whether a letter is in Spaun's visual field. According to the article, we could create a generic concept for 'letter' by adding the vectors of all 26 letters of the English alphabet. This sum is supposedly similar to any one of those letters, so if you take it and compute its dot product with whatever is currently in the visual system, you get a measure of similarity to the concept 'letter'.
This doesn't seem to make sense with low-dimensional vectors. For instance, a 5-bit vector could represent any one of the 26 letters, but adding all of them would give a sum that is not particularly similar to any one of them. Is the high-dimensional space of the vectors you use the reason the sum is similar to each of its parts?
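To check my intuition numerically, here is a quick sketch (random unit vectors standing in for the 26 letter pointers; the dimension and seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512
letters = [v / np.linalg.norm(v) for v in rng.standard_normal((26, D))]

# generic 'letter' concept: normalized sum of all 26 letter vectors
letter_concept = sum(letters)
letter_concept /= np.linalg.norm(letter_concept)

sims = [np.dot(letter_concept, l) for l in letters]
unrelated = rng.standard_normal(D)
unrelated /= np.linalg.norm(unrelated)

print(min(sims))                          # each letter scores roughly 1/sqrt(26), about 0.2
print(np.dot(letter_concept, unrelated))  # an unrelated vector scores near 0
```

In high dimensions the 26 random vectors are nearly orthogonal, so each one keeps a measurable share of the sum, which is exactly what fails with 5-bit vectors.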
3. If we have a sequence where vector P1 leads to P2, and P2 leads to P3, does that mean there always exists a transformation T such that P1 bound with T gives P2? Before this example was given in the article, 'binding' meant associating syntax roles with word vectors using circular convolution. But in this example, neither P1 nor T needs to be a word or a syntax role. So can binding be an abstract relation that leads from one step in a pattern to another, or an arbitrary way of attaching a syntax role to a concept? What else could it be used for?
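Here is what I understood of the transformation question, sketched with circular convolution (T computed by binding P2 with the approximate inverse of P1; all names are my placeholders, not the article's):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 512
P1, P2 = (v / np.linalg.norm(v) for v in rng.standard_normal((2, D)))

def bind(a, b):
    # circular convolution via FFT
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def inverse(a):
    # approximate (involution) inverse
    return np.concatenate(([a[0]], a[:0:-1]))

T = bind(P2, inverse(P1))   # solve P1 (*) T ~ P2 for T
P2_hat = bind(P1, T)        # apply T to P1
sim = np.dot(P2_hat / np.linalg.norm(P2_hat), P2)
print(sim)                  # high, around 0.7: the involution is only an approximate inverse
```

So such a T always exists approximately for random high-dimensional vectors, which is why it doesn't need to be a word or a syntax role.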

4. I didn't understand the basal ganglia example. Suppose you are considering 5 alternative actions. The one you selected stops inhibiting the basal ganglia, but the other four are in memory too, and they are still inhibiting the basal ganglia. What am I missing?

5. A general question: what is it about the low-pass filtering of the spikes arriving at the dendrites that allows for creating oscillators and attractors?
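For what it's worth, here is the toy picture I have in my head (a plain numpy sketch, leaving out neurons and spikes entirely: assuming a first-order low-pass synapse with time constant tau, and the NEF trick of implementing dynamics x' = A x by feeding the recurrent connection through the transform tau*A + I):

```python
import numpy as np

dt, tau = 0.001, 0.1
omega = 2 * np.pi                 # desired oscillation: 1 Hz
A = np.array([[0.0, -omega],
              [omega, 0.0]])      # harmonic oscillator dynamics x' = A x
A_rec = tau * A + np.eye(2)       # recurrent transform, to be filtered by the synapse

x = np.array([1.0, 0.0])          # represented state
for _ in range(int(1.0 / dt)):    # simulate one second
    # first-order low-pass synapse: dx/dt = (A_rec x - x) / tau = A x
    x = x + (dt / tau) * (A_rec @ x - x)

print(np.linalg.norm(x))          # radius stays near 1: a stable oscillation
```

If I have this right, the low-pass filter is what turns a static recurrent weight matrix into a differential equation, so the same trick gives attractors when A has stable fixed points instead of rotation.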
Finally:
6. Where are the course notes? The course is not offered online, I assume…
Thanks.