It has been shown that if you have a sequence of semantic pointers, you can find the operation (or transform) that leads from one pointer to the next. You can average all of these transforms together and then apply the averaged transform to the last item in the series to predict what comes next.
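For concreteness, here is a minimal NumPy sketch of that procedure, assuming circular convolution as the binding operation (as in the Semantic Pointer Architecture). The dimensionality, helper names, and the toy sequence are just for illustration:

```python
import numpy as np

def cconv(a, b):
    """Circular convolution, the binding operation for semantic pointers."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    """Approximate inverse under circular convolution (the involution)."""
    return np.concatenate(([a[0]], a[:0:-1]))

def unit(v):
    return v / np.linalg.norm(v)

d = 512
rng = np.random.default_rng(0)

# Build a toy sequence that really does follow one hidden step transform
T_true = unit(rng.standard_normal(d))
seq = [unit(rng.standard_normal(d))]
for _ in range(3):
    seq.append(cconv(T_true, seq[-1]))

# Estimate the transform between each consecutive pair, then average
steps = [cconv(seq[i + 1], inv(seq[i])) for i in range(len(seq) - 1)]
T_avg = np.mean(steps, axis=0)

# Apply the averaged transform to the last item to predict the next one
pred = cconv(T_avg, seq[-1])
actual = cconv(T_true, seq[-1])
print(np.dot(unit(pred), unit(actual)))  # similarity well above chance
```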
I was wondering whether this works with rotation. When people try to decide whether one image matches a rotated copy of another, the time they take grows with the angle of rotation (the classic mental rotation result), as if they were running a rotation simulation in their minds. Can a sequence of rotated images be predicted in semantic pointer theory, or by Spaun? In other words, if you rotate the item by 30 degrees and then by 60 degrees, can you predict what it will look like at 90 degrees?
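Here is one way the test might be posed numerically, reusing the same transform extraction as above. To be clear about the assumptions: the flattened pixel vectors below are only a crude stand-in for real semantic pointers of the images, the shape and angles are arbitrary, and whether the averaged transform actually anticipates the 90-degree view is exactly the open question:

```python
import numpy as np
from scipy.ndimage import rotate

def cconv(a, b):  # same helpers as in the sketch above
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def inv(a):
    return np.concatenate(([a[0]], a[:0:-1]))

def unit(v):
    return v / np.linalg.norm(v)

# A small asymmetric shape, so rotation visibly changes the picture
img = np.zeros((32, 32))
img[8:24, 12:16] = 1.0
img[8:12, 12:24] = 1.0

def view(angle):
    """Flattened, normalized pixel vector of the rotated image
    (a crude stand-in for a real semantic pointer of the image)."""
    return unit(rotate(img, angle, reshape=False, order=1).ravel())

s0, s30, s60, s90 = (view(a) for a in (0, 30, 60, 90))

# Extract the 0->30 and 30->60 transforms and average them
T_avg = np.mean([cconv(s30, inv(s0)), cconv(s60, inv(s30))], axis=0)

# Apply the average to the 60-degree view to guess the 90-degree one
pred = cconv(T_avg, s60)
print(np.dot(unit(pred), s90))  # how close the guess is to the real 90-degree view
```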