Post-training quantization of a NengoDL model

Hello NengoDL community,

I would like to know how we can perform post-training quantization on a trained model, such as the one from the MNIST tutorial. Normally, TFLiteConverter is used to quantize any ANN. Can we do something similar with a NengoDL model?
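For reference, this is the standard flow I would use on a plain Keras model (a minimal sketch; the small `Sequential` model here is just a stand-in for the actual trained MNIST network):

```python
import tensorflow as tf

# Stand-in for a trained MNIST classifier (in practice, load the
# trained Keras model from the tutorial instead).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

# Standard TFLite post-training (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # serialized quantized model (bytes)
```

My question is essentially whether a NengoDL-built network can be exported or converted in a way that makes it compatible with this kind of pipeline.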

Thank you in advance for your answer. :slight_smile: