Hi @6011304 and welcome to the Nengo forums!
It’s hard to say if what you are attempting to achieve is doable in NengoDL without seeing some example code, but NengoDL is quite flexible, so here are some suggestions!
Creating the Model with Custom Weights
In Nengo and NengoDL, you can specify the weights to use when creating connections. Assuming the weights aren’t modified afterwards (either through NengoDL training or an online learning rule), connections created with binary values will effectively behave like binary weights even though they are represented with floating-point values.
Modifying the Weights Directly
If the binary weights are only determined after the Nengo model is constructed, they can be modified within the simulator block using code similar to this post: Targeted synapse removal
Using a Custom dtype
While I haven’t tested this, TensorFlow (and by extension, NengoDL) allows you to pass in a `dtype` parameter when creating layers. I’m not 100% sure if you can create your own binary `dtype` object in TensorFlow (I think the closest I’ve seen is `int8`), but if you can, it should theoretically be possible to create a network using that `dtype` when creating the layers.
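For illustration, here is where the `dtype` parameter goes (using `float16` as a stand-in, since TensorFlow has no binary dtype):

```python
import tensorflow as tf

# A layer built with a reduced-precision dtype; float16 is just a
# placeholder to show where the dtype parameter is passed
layer = tf.keras.layers.Dense(8, dtype="float16")
layer.build(input_shape=(None, 4))
```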
Quantizing the Weights
This idea was suggested by a colleague of mine. TensorFlow allows you to apply quantization to connection weights (see: here and here). We have some custom TensorFlow code used in other projects that has tested quantization down to 4 bits, so it may be possible to extend this to binary values instead.
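As a rough sketch of weight quantization, TensorFlow’s built-in `tf.quantization.fake_quant_with_min_max_args` simulates low-bit quantization (note it bottoms out at 2 bits, so true binary weights would still require custom code like the kind mentioned above):

```python
import tensorflow as tf

# Example weight values to quantize
weights = tf.constant([[-0.8, 0.3], [0.9, -0.1]])

# Fake-quantize the weights over a fixed range; num_bits=2 (the minimum
# TensorFlow supports) restricts the values to at most 4 distinct levels
quantized = tf.quantization.fake_quant_with_min_max_args(
    weights, min=-1.0, max=1.0, num_bits=2
)
```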
Using the FPGA
Here’s where I have some disappointing news. If you are intending to use our NengoFPGA platform to simulate Nengo networks on an FPGA, the data type used for the connection weight matrices is hardcoded into the compiled bitstream. This means that an end-user (like yourself) would be unable to customize the data type to anything other than a floating-point number. However, the first two approaches I mentioned above (“emulating” binary values with floating-point numbers) would work with NengoFPGA as well.