Neural backup for semantic pointer theory?


I just saw a YouTube video by a researcher at Carnegie Mellon (Tom Mitchell) who used vectors representing words as input and correlated them with voxels in an fMRI of the brain as it was presented with each word. For instance, if you present a person with an image of a hand and the word "hand" and watch their fMRI, you get a movie of voxels turning on and off: voxels that might initially correlate with simple aspects such as the number of letters in the word, but eventually correspond to meaningful attributes of the word. They started with that simple idea, but then progressed to adjective-noun phrases, testing whether the pattern for the adjective could be decoded even when the noun was presented (it could be, but with less accuracy) and whether it could still be decoded after the phrase had been presented (it could, for a while). Then they went on to sentences.
I didn't understand all their findings, but this sounds like a good test of semantic pointer theory.
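The core idea described above can be sketched in code. This is a hedged toy version, not the researchers' actual pipeline: it assumes each word has a semantic feature vector, learns a linear map from features to voxel activations on training words, and then decodes a held-out word by comparing its observed voxel pattern against the predicted pattern for every candidate. All sizes and the synthetic data are assumptions for illustration.

```python
import numpy as np

# Toy sketch of feature-to-voxel decoding (assumed setup, synthetic data):
# each word has a semantic feature vector; voxel patterns are modeled as a
# noisy linear function of those features.
rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500  # assumed sizes

true_map = rng.normal(size=(n_features, n_voxels))
features = rng.normal(size=(n_words, n_features))        # one row per word
voxels = features @ true_map + 0.5 * rng.normal(size=(n_words, n_voxels))

# Leave one word out and fit ridge regression on the remaining words.
test = 0
train = np.arange(n_words) != test
X, Y = features[train], voxels[train]
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Decode the held-out word: compare its observed voxel pattern with the
# predicted pattern for every candidate word (cosine similarity).
pred = features @ W
obs = voxels[test]
cos = (pred @ obs) / (np.linalg.norm(pred, axis=1) * np.linalg.norm(obs))
decoded = int(np.argmax(cos))
print(decoded)
```

With this synthetic data the signal dominates the noise, so the held-out word is recovered; with real fMRI data the margin is of course far smaller, which is roughly what the decoding accuracies in the video reflect.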

The URL is (