How do you handle live data feeds for the LMU?


All the training has been done via CSV or DB, and I am not quite sure how to go about feeding the data into the LMU from a live stream. The data can be displayed via a web interface that refreshes for each new datapoint, so I have thought about chopping it into manageable sections, downloading those, and feeding them into the LMU in chunks, but that doesn’t seem ideal.

I can also pull the data through a Python API, which gives a live stream of exactly the same data as shown on the web page, but I can’t quite figure out how to feed this into the LMU live, one new datapoint at a time as it arrives.

Another issue is that there are two data sources, which I combined in the DB before feeding them into the LMU. Their nanosecond timestamps don’t usually line up, so a datapoint in one stream doesn’t mean there is a matching datapoint in the other.

If anyone has any experience with this, code samples of what they have done, or suggestions, I would be eternally grateful. I couldn’t find anything about this on the forum, so it may help others as well.

Thanks in advance!

Hi @fladventurerob,

The exact implementation depends on how you have implemented the LMU. If it is implemented as a Nengo network, you should be able to use a nengo.Node to stream data into your network. One thing to note: if you do this, you’ll need to run the Nengo simulation in an “infinite” (no stop time) mode:

with nengo.Simulator(model) as sim:
    while <some condition>:
        sim.step()  # Runs the simulation for one timestep

There are other run functions as well. You can refer to the nengo.Simulator documentation for them.