I was trying to apply Oja's rule to toy data, but I can't seem to find any examples where the input to two different ensembles is data rather than a function.
When I attempt to create a simplified version, I keep getting the following error:
def stim_ans(t):
    return [input_ex[t][0], input_ex[t][1], input_ex[t][2], input_ex[t][3]]

IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
As a future follow-up, is it possible to have weights learn both via backprop (deep learning) and via local rules? Would I have to use NengoDL for that?
This is because t is a float (the time, in seconds), not an integer or any other type that is a valid index. To convert it into an integer index, you can divide by dt (default of 0.001) and cast to int. Note that the first time-step is t = dt (not 0).
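As a sketch of that conversion (the array here is a hypothetical stand-in for your input_ex, assumed to have 100 samples of 4 dimensions; the real data may differ):

```python
import numpy as np

dt = 0.001  # Nengo's default simulator time-step, in seconds

# Hypothetical toy data: 100 samples of 4-dimensional input
input_ex = np.random.RandomState(0).rand(100, 4)

def stim_ans(t):
    # t is a float time in seconds; the first time-step is t = dt,
    # so divide by dt, round to dodge floating-point error, and
    # subtract one to get a zero-based integer row index.
    i = int(round(t / dt)) - 1
    i %= len(input_ex)  # wrap around so long runs don't index past the end
    return input_ex[i]  # a length-4 vector, one value per dimension
```

This function can then be passed to a node (e.g. `nengo.Node(stim_ans, size_out=4)`) and connected to your ensembles as usual.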
Alternatively, you can use `nengo.processes.PresentInput(inputs, presentation_time)`, where presentation_time is the time (in seconds) that each element of the array is presented (e.g., presentation_time = dt to present one element per time-step).
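To make the indexing behind that concrete, here is a rough plain-NumPy stand-in for presenting each sample for a fixed window (the array shape and presentation_time value are assumptions for illustration; in a real model you would just use the process itself in a node):

```python
import numpy as np

dt = 0.001
presentation_time = 0.1  # hold each sample for 0.1 s (assumed value)

# Hypothetical toy data: 100 samples, 4 dimensions
input_ex = np.random.RandomState(0).rand(100, 4)

def present_input(t):
    # Which presentation window are we in? The first step is t = dt,
    # so shift by dt; the small epsilon guards against floating-point
    # error right at window boundaries.
    i = int((t - dt) / presentation_time + 1e-9)
    return input_ex[i % len(input_ex)]  # wrap around at the end
```

With presentation_time = dt this reduces to one sample per time-step, matching the conversion described above.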
You can use any learning rule with NengoDL. It might be tricky to successfully apply online and offline learning to the same parameters (because there would be feedback effects), but you’d be free to explore that.