All of my training so far has been done from CSV files or a database, and I'm not sure how to go about feeding data into the LMU from a live stream. The data can be displayed via a web interface that refreshes with each new datapoint, so I have considered chopping it into manageable sections, downloading those, and feeding them into the LMU in chunks, but that doesn't seem ideal.
I can also pull the data through a Python API, which gives a live stream (the exact same data as displayed on the web page), but I can't quite figure out how to feed this into the LMU live, one new datapoint at a time as it arrives.
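To make the question concrete, here is a rough sketch of the one-datapoint-at-a-time pattern I have in mind: keep a rolling buffer of the most recent points and hand the LMU a fixed-length window each time a new point arrives. Everything here is a placeholder (`fake_stream` stands in for the real API, and the window length and feature count are guesses), not working code against my actual setup.

```python
from collections import deque

import numpy as np

WINDOW = 4        # timesteps the LMU sees per prediction (assumed)
N_FEATURES = 1    # one value per datapoint (assumed)

def fake_stream():
    """Stand-in for the real Python API: yields one datapoint at a time."""
    for x in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]:
        yield x

def stream_windows(stream, window=WINDOW):
    """Maintain a rolling buffer; emit a (1, window, features) array
    every time a new datapoint arrives and the buffer is full."""
    buf = deque(maxlen=window)
    for x in stream:
        buf.append(x)
        if len(buf) == window:
            yield np.array(buf, dtype=np.float32).reshape(1, window, N_FEATURES)

# Each emitted batch would then be passed to something like model.predict(batch)
batches = list(stream_windows(fake_stream()))
print(len(batches), batches[0].shape)
```

The idea is that the model only ever sees fixed-shape input, so no chunked downloads are needed, but I don't know if re-feeding an overlapping window on every new point is the right way to run an LMU online, or whether there is a proper stateful/incremental mode for this.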
Another issue: there are two data sources, which I combined in the database before feeding into the LMU. Their nanosecond timestamps don't usually line up, so a datapoint arriving in one stream doesn't mean there is a matching datapoint in the other.
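For the alignment problem, the best idea I have so far is an "as-of" merge: for every point in stream A, pair it with the most recent point from stream B at or before that timestamp (effectively forward-filling the slower stream). A pure-Python sketch of what I mean, with toy timestamps rather than my real data:

```python
def merge_asof(stream_a, stream_b):
    """Merge two (timestamp, value) streams: pair each point in A with
    the most recent point from B at or before A's timestamp.
    Points in A that arrive before any B value are skipped."""
    b_iter = iter(stream_b)
    b_prev = None
    b_next = next(b_iter, None)
    for t_a, v_a in stream_a:
        # Advance B until its next point is past A's timestamp.
        while b_next is not None and b_next[0] <= t_a:
            b_prev = b_next
            b_next = next(b_iter, None)
        if b_prev is not None:
            yield t_a, v_a, b_prev[1]

a = [(1, "a1"), (5, "a2"), (9, "a3")]
b = [(2, "b1"), (6, "b2")]
merged = list(merge_asof(a, b))
print(merged)  # [(5, 'a2', 'b1'), (9, 'a3', 'b2')]
```

This is essentially what `pandas.merge_asof` does for batch data, but done incrementally so it could run on the live streams. I don't know whether forward-filling like this is sensible for LMU input, or if it biases the model, so I'd welcome thoughts on that too.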
If anyone has experience with this, code samples of what they have done, or suggestions, I would be eternally grateful. I couldn't find anything about it on the forum, so an answer may help others as well.
Thanks in advance!