Working with sequential data in Nengo is quite different from the approaches used in deep learning. Many of the operations performed by LSTMs and GRUs are impractical with the neuron models we typically use, and even if they could be implemented, they would be biologically implausible – Nengo attempts to respect biological constraints when possible.
So, work with sequences in Nengo has tended to happen at a higher, symbol-like level rather than at the level of individual neurons. We use an architecture called the Semantic Pointer Architecture (SPA) to work with symbol-like representations in spiking neural networks.
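To give a flavour of what "symbol-like" means here: semantic pointers are just high-dimensional vectors, and structure is built up by binding them together with circular convolution. Here's a minimal NumPy sketch of that idea on its own, outside of Nengo – the dimensionality and vector generation are arbitrary choices for illustration, not anything from the SPA library itself:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512  # illustrative dimensionality; SPA models often use hundreds of dimensions

def semantic_pointer():
    # a semantic pointer sketched as a random unit vector
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def bind(a, b):
    # circular convolution, the binding operation used by the SPA
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def approx_inverse(a):
    # approximate inverse for unbinding: reverse all elements but the first
    return np.concatenate(([a[0]], a[:0:-1]))

A, B = semantic_pointer(), semantic_pointer()
pair = bind(A, B)                         # a structured representation of (A, B)
recovered = bind(pair, approx_inverse(B)) # unbinding B gives back a noisy A
similarity = np.dot(recovered, A) / np.linalg.norm(recovered)
print(similarity)  # close to 1: the recovered vector points the same way as A
```

The point is that binding and unbinding are just vector operations, which a population of spiking neurons can compute, unlike the gating machinery inside an LSTM.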
We have a basic example of following a sequence here, which you could combine with an associative memory to map between categories. However, in building this model you would be prescribing the sequences and relationships a priori. Learning the sequences and relationships is something that we haven’t done, and I suspect it would be a research project taking several months. A model that learns the relationships between sequential symbols can be found in Rasmussen & Eliasmith (2014). In that model, they take the average difference between items in the sequence to get a vector that can then be used to generate further items in that sequence. That’s an oversimplification, of course; please check out the paper for a better description.
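To make that average-difference idea concrete, here is a drastically simplified NumPy sketch – plain vectors rather than semantic pointers, with made-up dimensions and noise levels, and not the actual model from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # illustrative dimensionality

# build a toy sequence whose items differ by (roughly) a fixed vector, plus noise
delta = rng.standard_normal(D)
start = rng.standard_normal(D)
seq = [start + i * delta + 0.01 * rng.standard_normal(D) for i in range(5)]

# average the differences between consecutive items to estimate the "rule"
avg_diff = np.mean([seq[i + 1] - seq[i] for i in range(len(seq) - 1)], axis=0)

# generate the next item by applying the estimated difference to the last item
predicted = seq[-1] + avg_diff
true_next = start + 5 * delta
cosine = np.dot(predicted, true_next) / (
    np.linalg.norm(predicted) * np.linalg.norm(true_next)
)
print(cosine)  # near 1: the prediction lines up with the true next item
```

In the real model the items are semantic pointers and the extrapolation happens in neurons, but the core intuition is this averaging-and-extrapolating step.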
Hope that helps! Again, if you’re more interested in using deep-learning-style models in Nengo, you can embed a TensorFlow model within a Nengo model using NengoDL, which can give you the best of both worlds and let you use LSTMs and GRUs inside Nengo networks.