I have the following setup, where a `learn` node computes a weight matrix that learns to approximate a function between the `pre` and `post` ensembles:
```python
learn = nengo.Node( memr_arr,
                    size_in=pre_nrn + pre.dimensions,
                    size_out=post_nrn,
                    label="Learn" )
nengo.Connection( inp, pre )
nengo.Connection( pre.neurons, learn[ :pre_nrn ], synapse=0.01 )
nengo.Connection( pre, learn[ pre_nrn: ], synapse=0.01 )
nengo.Connection( learn, post.neurons, synapse=None )
```
Instead of having an external ensemble calculate the error and project it back to `learn`, I would like to calculate the error directly inside the `learn` node, so that I don't have to worry about network delays. How would I go about doing this? Inside `learn` I could have:
```python
def __call__( self, t, x ):
    input_activities = x[ :self.input_size ]
    # the value post should represent: the function applied to pre's decoded value
    ground_truth = self.function_to_learn( x[ self.input_size: ] )
    # query each memristor for its conductance state
    extract_G = lambda memristor: memristor.get_state( t, value="conductance", scaled=True )
    extract_G_V = np.vectorize( extract_G )
    weights = extract_G_V( self.memristors )
    # calculate the output (the input currents to post's neurons) at this timestep
    return_value = np.dot( weights, input_activities )
    # calculate error
    # NOTE: the shapes don't match; return_value has size post_nrn while
    # ground_truth has the dimensionality of the learned function
    error = return_value - ground_truth
    self.error_history.append( error )
    return return_value
```
Does that seem correct, or would I still have to deal with some delay in the error signal?
There is also a problem: `return_value` is not the value `post` represents but the dot product of the weight matrix with `pre`'s activities, i.e. the input currents to `post`'s neurons, so I can't simply say `error = return_value - ground_truth`. How would I actually compute the value that `post` would represent in order to calculate the error? The only zero-delay approach I can think of is sketched below.
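Just a sketch of what I mean: `self.post_bias`, `self.post_decoders` and `self.post_neuron_type` are hypothetical attributes I would have to fill in from the built model (e.g. `sim.data[ post ].bias`, decoders solved offline from `post`'s tuning curves, and `post`'s neuron model), and I'm assuming `post` uses a rate neuron model:

```python
import numpy as np

def estimate_post_value( self, return_value ):
    # total current into post's neurons: the learn -> post.neurons
    # connection injects return_value directly, on top of each neuron's bias
    J = return_value + self.post_bias
    # static rate response; gain and bias are already folded into J,
    # so pass identity gain and zero bias here
    activities = self.post_neuron_type.rates( J, gain=np.ones_like( J ),
                                              bias=np.zeros_like( J ) )
    # decode the value post would represent from those activities
    return np.dot( self.post_decoders, activities )
```

With something like this, `error = self.estimate_post_value( return_value ) - ground_truth` would at least have matching dimensions, but I'm not sure it's the right way to do it.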
Am I going about this all wrong? I also thought of projecting back from `post` to `learn`, but that again would introduce a delay that isn't easily quantifiable. Concretely, the projection back would look something like this (the extended `size_in` and slice offsets are just illustrative):
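```python
# hypothetical: reserve post.dimensions extra inputs on the node for
# the decoded feedback from post
learn = nengo.Node( memr_arr,
                    size_in=pre_nrn + pre.dimensions + post.dimensions,
                    size_out=post_nrn,
                    label="Learn" )
# feedback edge; as far as I can tell this closes a loop
# (learn -> post.neurons -> learn), so Nengo requires a synapse on at
# least one connection in the loop, and that filter is exactly the
# hard-to-quantify delay I mentioned
nengo.Connection( post, learn[ pre_nrn + pre.dimensions: ], synapse=0.01 )
```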
Another idea would be to introduce a buffer in my `learn` node that accumulates error signals and performs a delayed update once the error signal for a given timestep has arrived.
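To illustrate the buffering idea (a sketch only; `delay_steps` stands for the feedback delay in timesteps, which I would still somehow have to know in advance):

```python
from collections import deque

import numpy as np


class DelayedErrorBuffer:
    # sketch: pair a late-arriving error with the activities that produced it

    def __init__( self, delay_steps ):
        self.activities = deque( maxlen=delay_steps )

    def push( self, input_activities ):
        # remember the activities behind this timestep's output
        self.activities.append( np.copy( input_activities ) )

    def matched_activities( self ):
        # once the buffer is full, the oldest entry is the one whose
        # error signal is arriving right now; before that, no update
        # can be made
        if len( self.activities ) == self.activities.maxlen:
            return self.activities[ 0 ]
        return None
```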
How does the PES rule deal with the delay in the error signal when modulating a connection?
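For reference, this is the standard PES wiring I have in mind, where the error is computed by an ensemble and projected to the learning rule through ordinary (filtered, hence delayed) connections:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    inp = nengo.Node( lambda t: np.sin( t ) )
    pre = nengo.Ensemble( 100, dimensions=1 )
    post = nengo.Ensemble( 100, dimensions=1 )
    error = nengo.Ensemble( 100, dimensions=1 )

    nengo.Connection( inp, pre )
    conn = nengo.Connection( pre, post, learning_rule_type=nengo.PES() )
    # error = actual - target
    nengo.Connection( post, error )
    nengo.Connection( inp, error, transform=-1 )
    # PES just receives whatever (slightly stale) error arrives here
    nengo.Connection( error, conn.learning_rule )
```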