Event-based input to a model

Hello All,

As a newbie, I’m enjoying Nengo a lot! From the examples, I can see that a set of non-neural inputs can be fed into a model using nengo.Node() and nengo.processes.PresentInput().

Now let’s say I have saved the output stream of an event-based sensor such as a DVS into multiple files, where each file holds all the events for one scene. For example, I expose one finger to the camera and save those events in one file, then expose two fingers to the camera and save those events into a different file. I now have a set of files, each containing a set of events. How can I use them as input to my model for training purposes?

To make it clearer, let me compare my problem with the MNIST example:

In the MNIST example, the inputs are MNIST images, each presented for a time interval t = 0.1.
In my dataset, each file is a set of events. I need to present each file separately for training, and the events within each file need to be presented with a time interval t.

Any help is appreciated.


We don’t have a built-in way to feed in event data like DVS (although that would actually be a good TODO item, now that I think about it). In Nengo, input/output signals are always represented as vectors. So, for example, if you have DVS data with 256 pixels, you would want to change that to a 256-dimensional vector, where each element in that vector would be, e.g., 1 if that pixel fired an event on a given timestep, and 0 otherwise. You would then write a Node that outputs your converted sensor data on each timestep. Something like:

def dvs_input(t):
    # t is the current simulation time in seconds; convert it to a step index
    step = int(t / dt)
    events = my_dvs_data[step]
    output = convert_to_vector(events)
    return output

input_node = nengo.Node(dvs_input)

(where dt is the simulation timestep, 0.001 s by default).
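For the conversion itself, here is a minimal sketch of what convert_to_vector could look like, assuming each event in a timestep is stored as a pixel index and n_pixels is the sensor dimensionality (both hypothetical names, adapt to your file format):

```python
import numpy as np

n_pixels = 256  # assumed sensor resolution, e.g. a 16x16 DVS

def convert_to_vector(events):
    """Turn the pixel indices that fired on this timestep into a 0/1 vector."""
    output = np.zeros(n_pixels)
    output[list(events)] = 1.0  # set every pixel that fired to 1
    return output
```

Each timestep then yields a fixed-length vector regardless of how many events occurred, which is what the Node needs to output.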

Or if you wanted to generate training data for nengo-dl, you would do something like:

training_data = np.array(
    [[convert_to_vector(my_dvs_data[step]) for step in range(n_steps)]
     for my_dvs_data in all_my_files_in_dataset]
)
sim.train({input_node: training_data}, ...)

Thanks Daniel, I’ll try this. It seems to be exactly what I need!

OK, I have it working now. There is one thing I need to double-check to make sure I’m doing it right.
My training data is a 3-dimensional array: [n_files, n_steps, input_vector(t)]. My targets were originally a 2-dimensional array, [n_files, label]. Because of the shape of my inputs, I had to provide sim.train() with 3-dimensional targets, so I created new targets of shape [n_files, n_steps, label] using the following code:

new_target = np.zeros((n_files, n_steps, n_labels))
for t in range(n_steps):
    new_target[:, t, :] = old_target

Does it sound about right?

Yep, that’s exactly right. If you want to avoid the for loops you could do new_targets = np.tile(old_targets[:, None, :], (1, n_steps, 1)), but that’s just splitting hairs 🙂.
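To see that the two approaches agree, here is a quick check with made-up sizes (the array names mirror the thread; the actual shapes come from your dataset):

```python
import numpy as np

n_files, n_steps, n_labels = 4, 10, 3  # made-up sizes for illustration
old_targets = np.random.rand(n_files, n_labels)

# loop version: copy the per-file label to every timestep
looped = np.zeros((n_files, n_steps, n_labels))
for t in range(n_steps):
    looped[:, t, :] = old_targets

# tiled version: insert a length-1 time axis, then repeat it n_steps times
tiled = np.tile(old_targets[:, None, :], (1, n_steps, 1))

assert np.array_equal(looped, tiled)
```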

One other thing: I’m not sure if this is your use case, but sometimes you only want to specify the targets on certain timesteps (e.g., the last timestep), and you don’t care what the output is on other timesteps. If you are using the default "mse" objective, then you can do this by setting the target to np.nan on those timesteps you don’t care about. If you’re using a custom objective function you’ll need to build that logic in yourself though.
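For example, to train only on the final timestep with the default "mse" objective, you could mask every other timestep with np.nan. A sketch, using the same [n_files, n_steps, n_labels] layout as above with made-up sizes:

```python
import numpy as np

n_files, n_steps, n_labels = 4, 10, 3  # made-up sizes for illustration
old_targets = np.random.rand(n_files, n_labels)

# NaN everywhere except the last timestep, so only that step
# contributes to the loss
masked_targets = np.full((n_files, n_steps, n_labels), np.nan)
masked_targets[:, -1, :] = old_targets
```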

Thanks for the tip on the loop; as a HW guy I’m also new to Python 🙂. I’ll definitely try the target-on-last-timestep approach. I’m wondering if it can make training faster.