Hebbian Learning with data

I was trying to apply Oja’s rule with toy data, but I can’t seem to find any examples where the input to two different ensembles is data as opposed to a function.

The only example I could find was this one (which hopefully clarifies what I mean): https://github.com/s72sue/std_neural_nets/blob/master/classification/3-class_classification.ipynb

However, when I attempt to create a simplified version, I keep getting the following error:

def stim_ans(t):
    return [input_ex[t][0], input_ex[t][1], input_ex[t][2], input_ex[t][3]]

IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices

As a future follow-up, is it possible to have weights learn both via backprop (deep learning) and via local rules? Would I have to use the NengoDL version?

Anyway, any help is appreciated,


This is because t is a floating-point value (time, in seconds), not an integer or any other type that is a valid index. To convert it into an integer index, you may want to divide by dt (default of 0.001) and cast to int. Note that the first time-step is t = dt (not 0).
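A minimal sketch of that conversion, assuming input_ex is a list (or array) of samples and the default dt of 0.001; the data values here are made up for illustration:

```python
# Hypothetical toy data: four samples of four values each.
input_ex = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
dt = 0.001  # Nengo's default simulation time-step, in seconds

def stim_ans(t):
    # t is simulation time in seconds; convert it to an integer index.
    # The first time-step is t = dt, so subtract 1 for a 0-based index.
    i = int(round(t / dt)) - 1
    return input_ex[i % len(input_ex)]  # wrap around when t runs past the data
```

The round-before-cast guards against floating-point time values like 0.0039999 truncating to the wrong index.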

Alternatively, nengo.processes.PresentInput can be convenient for such inputs, e.g.,

input_stim = nengo.Node(output=nengo.processes.PresentInput(input_ex, presentation_time))

where presentation_time is the time (in seconds) for which each element of the array is presented (e.g., presentation_time = dt to present one element per time-step).
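To make the indexing behaviour concrete, here is a pure-NumPy sketch of the semantics I understand PresentInput to have (each row shown for presentation_time seconds, cycling through the array); the data is made up for illustration:

```python
import numpy as np

def present_input(inputs, presentation_time, t):
    # Show row i of `inputs` during [i * presentation_time, (i + 1) * presentation_time),
    # wrapping around once every row has been presented.
    inputs = np.asarray(inputs)
    i = int(t / presentation_time) % len(inputs)
    return inputs[i]

input_ex = np.array([[0., 1.], [2., 3.], [4., 5.]])
```

With presentation_time = dt, each call to the node advances to the next row, giving one sample per time-step.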

@Eric has some examples learning weights online via “feedback alignment” in Nengo, which is essentially a more biologically plausible version of backprop. Here is one: https://github.com/hunse/phd/blob/master/scripts/learn_mnist/online_mnist.py @tcstewar and/or @astoecke may have something more recent. If you are okay with offline backprop, then NengoDL would be your best bet.
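For intuition, here is a minimal NumPy sketch of the core idea behind feedback alignment on a toy two-layer network: the backward pass sends the error through a fixed random matrix B instead of the transpose of the forward weights. All sizes, data, and hyperparameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(n_hid, n_in))
W2 = rng.normal(scale=0.5, size=(n_out, n_hid))
B = rng.normal(scale=0.5, size=(n_hid, n_out))  # fixed random feedback matrix

x = rng.normal(size=n_in)        # toy input
target = rng.normal(size=n_out)  # toy target

lr = 0.01
errors = []
for _ in range(500):
    h = np.tanh(W1 @ x)          # forward pass
    y = W2 @ h
    e = y - target
    errors.append(np.linalg.norm(e))
    # Key difference from backprop: the error is propagated backwards
    # through the fixed random matrix B rather than through W2.T.
    dh = (B @ e) * (1 - h ** 2)  # tanh derivative
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)
```

Despite the feedback weights never being learned, the error still decreases, because the forward weights gradually come into "alignment" with B.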

Thanks so much! I’ll give it a go with Nengo first.

I’m still not clear about how other learning rules are integrated into NengoDL.

You can use any learning rule with NengoDL. It might be tricky to successfully apply online and offline learning to the same parameters (because there would be feedback effects), but you’d be free to explore that.