Actually, someone has tried this before. Olivia Perryman (@olivia) worked on this at the Nengo Summer School 2016. To summarize her results: it is possible to do this in Nengo. She used MNIST digits, and the rotated digits came out reasonably accurate, at least accurate enough to still be recognizable.
I don’t think her model was using semantic pointers, though. She had a network with a single hidden layer of neurons and solved for the rotation transform at the neuron level. This led to problems scaling up: since the neurons were fully connected, the transform matrix had a number of elements equal to the square of the number of neurons. So one extension of her work would be to use a visual model that produces semantic pointers (like the one used in Spaun) and solve for the transform in the semantic pointer space, which is much lower-dimensional. I have no idea whether this would work or not. At the very least, you might have to make sure your network is trained on all rotations of digits.
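To make the scaling point concrete, here's a minimal sketch (not her code, just an illustration with NumPy and synthetic data) of solving for a linear transform between paired representations via least squares. The idea is the same whether the vectors are neuron activities or semantic pointers; the difference is just the dimensionality `d`, and the transform has `d * d` elements, so a 64-dimensional semantic pointer space is vastly cheaper than thousands of neurons:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each column of A is a vector representation of a digit
# image, and the matching column of B represents that image rotated by some
# fixed angle. Solving T @ A ~= B for T gives a linear "rotation" transform.
d = 64          # dimensionality of the representation (e.g. semantic pointer)
n_pairs = 500   # number of (original, rotated) training pairs

A = rng.standard_normal((d, n_pairs))
T_true = rng.standard_normal((d, d)) / np.sqrt(d)
B = T_true @ A  # stand-in for the rotated representations

# Least-squares solve: find T minimizing ||T @ A - B||.
# lstsq solves A.T @ X ~= B.T, so the transform is X.T.
X, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
T = X.T

# The transform has d * d elements: 64 * 64 = 4096 here, versus
# millions of elements for a fully connected layer of thousands of neurons.
print(T.shape)
print(np.allclose(T @ A, B))
```

With enough training pairs this recovers the transform exactly here because the data are noiselessly linear; real neural or semantic pointer data would only be approximately linear, which is part of why it's an open question whether this works.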
Here’s a repository of her code: https://github.com/oliviaperryman/Nengo-repository
I’m not sure what state it’s in (it’s over a year old now, so it might not run with current Nengo). But looking through the code could give you an idea of places to start, if this is something you’re interested in modelling.