I need to implement a delay node with variable delay times. The amount of delay would depend on what is represented in the spa.State that connects to that node (it could also be an ensemble instead of a spa.State). Let me give an example to clarify this a bit.
I have two SPA states, state1 and state2, and they are connected through a node in the following way:

state1 -> delay_node -> state2
They both share the same vocabulary with semantic pointers A, B, and C. When A is present in state1, its representation in state2 should be delayed by 0.5 sec; when B is present, by 1.0 sec; when C, by 2.0 sec.
For now, I set the input to state1 manually in the GUI.
By modifying the Delay Node example, I managed to hack together a solution that seems to work fine:
import nengo
from nengo import spa
import numpy as np

dt = 0.001
d = 16
words = ['A', 'B', 'C']
delays = {'A': 0.5, 'B': 1.0, 'C': 2.0}
max_delay = max(delays.values())  # max expected delay

vocab = spa.Vocabulary(dimensions=d)
for word in words:
    vocab.parse(word)

class Delay(object):
    def __init__(self, dimensions, timesteps=50):
        self.history = np.zeros((timesteps, dimensions))

    def step(self, t, x):
        # Compare the input against all known semantic pointers
        sim = np.dot(vocab.vectors, x)
        i = np.argmax(sim)
        if sim[i] < 0.5:  # assume noise
            roll_i = -1
        else:  # assume stable SP; fast-forward through the buffer
            roll_i = -int(max_delay // delays[words[i]])
        self.history = np.roll(self.history, roll_i, axis=0)
        # Overwrite all fast-forwarded slots, not just the last one,
        # so stale wrapped-around rows cannot reappear at the output
        self.history[roll_i:] = x
        return self.history[0]

delay = Delay(d, timesteps=int(max_delay / dt))

with spa.SPA() as model:
    model.state1 = spa.State(dimensions=d, vocab=vocab)
    stim = nengo.Node(delay.step, size_in=d, size_out=d)
    nengo.Connection(model.state1.output, stim)
    model.state2 = spa.State(dimensions=d, vocab=vocab)
    nengo.Connection(stim, model.state2.input)
This code assumes the maximal delay and, at every time step, checks whether a known SP is present in state1. If so, it "fast-forwards" the history by max_delay / specific_delay steps.
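To make the fast-forward mechanism concrete, here is a minimal NumPy-only sketch (the buffer size and the name run_buffer are mine, not from the model above) showing that rolling a ring buffer by k slots per step shortens the effective delay by roughly a factor of k:

```python
import numpy as np

def run_buffer(inputs, timesteps=8, k=2):
    """Roll a 1-D ring buffer by -k per step, write the input at the
    tail, and read the output from the head."""
    history = np.zeros(timesteps)
    outputs = []
    for x in inputs:
        history = np.roll(history, -k)
        history[-k:] = x  # overwrite the fast-forwarded slots
        outputs.append(history[0])
    return outputs

out = run_buffer([1.0] + [0.0] * 9)
# The impulse fed in at step 0 reaches the head (timesteps - k) // k = 3
# steps later, instead of 7 steps with k=1.
```

With k = 1 the same 8-slot buffer delays the impulse by 7 steps, so choosing k = max_delay / specific_delay per recognized SP yields the shorter, SP-specific delay.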
This seems to work ok, but I wonder if there is a better way to do it, and whether anyone who has spent some time thinking about this wants to share their wisdom. I expect that when I scale up the number of vectors in my vocabulary (a few hundred), this solution could become somewhat slow.