I have a specific use case for a Digilent PYNQ-Z1 board. I would like to run a Nengo NN on a PYNQ-Z1 to perform adaptive control of a plant model that runs on another board (say, a Raspberry Pi) under some flavor of Linux in "soft real-time." The plant model is generated with Simulink Coder, etc.
From the online description (https://www.xilinx.com/products/boards-and-kits/1-hydd4z.html) I understand that the PYNQ-Z1 runs Linux on its CPU and has controllers (and, I assume, drivers) for UART, 1 Gbps Ethernet, etc.
Right now I am thinking about connecting the PYNQ-Z1 and the Raspberry Pi through UART. Later on, if needed, I can try UDP/IP. Do you have any advice or experience with setting up and running such a hybrid? Are there any obstacles that fail the sniff test right away?
BTW, I cannot afford the $10k for a Simulink HDL Coder license to target the FPGA and run the plant on it as well. So I have to jerry-rig a hybrid of some sort.
We have forwarded your query to the relevant team and they will respond early next week.
Thank you for your patience.
Hi Bogdan and thank you for your patience.
You have two options to get Nengo running on the PYNQ-Z1 boards. Both options are detailed below.
Since the PYNQ SD images come pre-configured with a Python interpreter, you can install Nengo on the boards using the standard pip install nengo command (assuming there is an internet connection). With this, Nengo will run the neural simulations on the ARM processor, and all of the examples found on https://nengo.ai will work without any modifications.
However, we do provide an alternative solution which leverages the FPGA hardware to run the neural simulations faster than the PYNQ-Z1's ARM processor alone. This is our NengoFPGA (or Brainboard) system. To use NengoFPGA, you will need to purchase a license. NengoFPGA currently isn't distributed with the ability to run models directly from the PYNQ's ARM (we use a remote PC host); however, it is certainly possible and we've done so internally for testing. Whether you run directly from the ARM or keep the remote PC host, adding the plant model to the system would entail modifying the scripts and drivers shipped with NengoFPGA to accommodate the additional data.
I have a couple of additional comments which apply with or without NengoFPGA. We use UDP/IP when interfacing Nengo with remote devices; it's simple enough to drop socket code into a Nengo Node. As far as I know, we have never used UART as an interface, but if it's possible to do in Python, it should be easy to place into a Nengo Node as well. In fact, if you can import C code or anything else through a Python wrapper, it should be relatively simple to add to a Nengo Node. Lastly, the ARM processor on the PYNQ boasts moderate performance, but be wary of running large models or pieces of code on the ARM, as it will undoubtedly underperform compared to a full desktop or laptop computer.
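To make the "socket code in a Nengo Node" idea concrete, here is a minimal sketch using only the Python standard library. The class name (UDPPlantLink), the wire format (little-endian doubles), and the addresses in the comment are all assumptions for illustration, not part of any Nengo API; the callable's (t, x) signature is the one a nengo.Node expects.

```python
import socket
import struct

class UDPPlantLink:
    """Hypothetical callable for a nengo.Node: sends the control signal
    to the remote plant over UDP and returns the latest plant state."""

    def __init__(self, remote_addr, local_addr, dims, timeout=0.01):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind(local_addr)     # address the plant replies to
        self.sock.settimeout(timeout)  # don't stall the simulation
        self.remote = remote_addr
        self.dims = dims
        self.last = [0.0] * dims       # value to hold on a missed packet

    def __call__(self, t, x):
        # Pack the control vector as little-endian doubles and send it.
        self.sock.sendto(struct.pack("<%dd" % len(x), *x), self.remote)
        try:
            data, _ = self.sock.recvfrom(8 * self.dims)
            self.last = list(struct.unpack("<%dd" % self.dims, data))
        except socket.timeout:
            pass                       # no reply yet; keep previous state
        return self.last

# In a Nengo model this would be dropped in roughly as (addresses and
# dimensions are placeholders):
#   plant = nengo.Node(UDPPlantLink(("raspberrypi.local", 5005),
#                                   ("0.0.0.0", 5006), dims=2),
#                      size_in=2, size_out=2)
```

The timeout-and-hold behavior matters for soft real-time use: a dropped UDP packet then costs one stale timestep rather than blocking the whole simulation.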
Ben, I appreciate the detailed reply and no worries. I’ve been teaching myself Python in the meantime.
I made a diagram, see below, to help me illustrate the tool chains and how I can set them up. For the time being the plan is to run the plant model on a Raspberry Pi and interface with Nengo FPGA on PYNQ-Z1 via UDP/IP. If you have any feedback, please let me know.
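For the Raspberry Pi side of that diagram, a stand-in for the Simulink-generated plant could be sketched as below. Everything here is an assumption for illustration: the first-order dynamics replace whatever step function Simulink Coder actually generates, and the port in the comment is arbitrary; only the UDP request/reply pattern is the point.

```python
import socket
import struct

def plant_step(state, u, dt=0.001):
    # Hypothetical first-order plant x' = -x + u, standing in for the
    # step function that Simulink Coder would generate.
    return [s + dt * (-s + ui) for s, ui in zip(state, u)]

def serve_plant(sock, n_packets, dims=1):
    """Answer n_packets control packets: step the plant once per packet
    and reply with the new state to whoever sent the control signal."""
    state = [0.0] * dims
    for _ in range(n_packets):
        data, addr = sock.recvfrom(8 * dims)
        u = struct.unpack("<%dd" % dims, data)
        state = plant_step(state, u)
        sock.sendto(struct.pack("<%dd" % dims, *state), addr)
    return state

# On the Pi this would run as a long-lived loop, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.bind(("0.0.0.0", 5005))  # port is an arbitrary choice
#   serve_plant(sock, n_packets=10**9)
```

Because the plant only steps when a control packet arrives, the Nengo side's timestep effectively paces the loop, which fits the "soft real-time" framing above.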
It would be nice to use only one board and have the plant model run on the PYNQ-Z1 ARM processor but, to my knowledge, it is not supported by the Simulink tool chain vendors. I’ll report back here as I progress through the project.
May I ask what your goal is in using the FPGA? As NengoFPGA is relatively new and still under active development, there are currently some limitations, and I want to make sure it will meet your needs. Similarly, it may be useful to explore your proposed system with standard Nengo before adding the FPGA piece, to flesh out details and ensure NengoFPGA is right for you.
If you do decide to pursue NengoFPGA, I want to note a few things to ensure we are on the same page. The NengoFPGA package effectively adds a new ensemble type, FpgaPesEnsembleNetwork, that runs on the FPGA, while the remainder of your network or model is standard Nengo. NengoFPGA is an integrated extension of Nengo, not an entirely separate system. Typically we have a Nengo model running on the host PC with a portion designated to run on the FPGA. The NengoFPGA system uses the ARM processor as an interface between the FPGA and the host PC and, in fact, does not require Nengo on the ARM at all. So it may be easier to integrate the plant model with NengoFPGA (and Nengo) on the host PC rather than interfacing directly with the PYNQ.
That being said, it is possible to run directly from the PYNQ's ARM with some modifications, as noted in my previous reply. It really depends on your use case and the amount of effort you are willing to invest. With respect to your plant model, we've never tried integrating Simulink models, but the PYNQ has a standard ARM processor running Ubuntu, so it might be possible to put the plant on the PYNQ as well; I'm not sure. On the other hand, you may be able to just run Nengo on the Raspberry Pi and use a single device, if that meets your needs.
Thank you again for the details. Until I read your latest reply I assumed that I could run a standalone Nengo application on the PYNQ-Z1 FPGA, with comm support from the ARM. Under this assumption I was hoping to use the FPGA as a surrogate for a Loihi board or a similar piece of hardware that supports Nengo SNNs. From your description I realized that my assumption was wrong.
However, I like your suggestion to run both the Simulink-generated plant model and Nengo on a single Raspberry Pi board. I know that MathWorks modifies Raspbian (a Debian-based distro) to make it work with Simulink, but it's worth trying.
Also, I have already started to prototype the plant model + Nengo SNN in Python, as you suggested. In the meantime I am working on a proposal for the Nengo Summer School as well, where I'd like to explore this project further.