Delay Node with a variable delay time

I need to implement a delay node with variable delay times. The amount of delay would depend on what is represented in the spa.State that connects to that node (instead of spa.State it could also be an ensemble). Let me give an example to clarify this a bit.

I have two SPA states, state1 and state2, and they are connected through a node in the following way:
state1 -> delay_node -> state2

They both share the same vocabulary with semantic pointers A, B and C. When A is present in state1, its representation in state2 should be delayed by .5 sec. If B is present, then 1.0 sec. C for 2 sec.

For now, I set the input to state1 manually in the GUI.

By modifying the Delay Node example, I managed to hack together a solution that seems to work fine:

import nengo
from nengo import spa
import numpy as np

dt = 0.001
d = 16

words = ['A', 'B', 'C']
delays = {'A': .5, 'B':1., 'C':2.}

max_delay = max(delays.values())  # max expected delay (np.max chokes on dict_values in Python 3)

vocab = spa.Vocabulary(dimensions=d)
for word in words:
    vocab.parse(word)  # populate the vocabulary with one SP per word
class Delay(object):
    """Delay line whose delay depends on which SP is present in x."""

    def __init__(self, dimensions, timesteps=50):
        self.history = np.zeros((timesteps, dimensions))

    def step(self, t, x):
        roll_i = -1
        sim =, x)  # similarity of x to every vocab vector
        i = np.argmax(sim)
        if sim[i] < 0.5:   # assume noise
            roll_i = -1
        else:  # assume stable SP; fast-forward proportionally to its delay
            roll_i = -int(max_delay // delays[words[i]])
        self.history = np.roll(self.history, roll_i, axis=0)
        self.history[roll_i] = x
        return self.history[0]

delay = Delay(d, timesteps=int(max_delay / dt))

with spa.SPA() as model:
    model.state1 = spa.State(dimensions=d, vocab=vocab)
    stim = nengo.Node(delay.step, size_in=d, size_out=d)
    nengo.Connection(model.state1.output, stim)
    model.state2 = spa.State(dimensions=d, vocab=vocab)
    nengo.Connection(stim, model.state2.input)

This code assumes the maximal delay and, at every time step, checks whether a known SP is present in state1. If so, it "fast-forwards" the history by max_delay/specific_delay steps.
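The roll-based fast-forward can be illustrated on a toy buffer in plain NumPy, detached from the model above. Note that only one slot is freshly written per step, even when the roll moves the contents by more than one position:

```python
import numpy as np

# Toy delay line: 4 timesteps of a 2-dimensional signal.
history = np.zeros((4, 2))
x = np.array([1.0, 1.0])

# Normal operation (roll_i = -1): shift everything one step toward
# the output at index 0 and write the newest input at the end.
history = np.roll(history, -1, axis=0)
history[-1] = x

# Fast-forward (roll_i = -2): stored vectors move toward the output
# twice as fast, but only one slot is overwritten, so a stale
# wrapped-around row survives in the buffer.
history = np.roll(history, -2, axis=0)
history[-2] = x
```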

This seems to work OK, but I wonder if there is a better way to do it, and whether anyone who has also spent some time thinking about this wants to share their wisdom. I expect that when I scale up the number of vectors in my vocabulary (to a few hundred), this solution could become somewhat slow.

A few thoughts:

  • It seems to me that for shifts of more than one step you forget to fill some values in self.history, which might be a problem when the shift decreases at a later point. But I haven’t tried out the code and might be imagining things the wrong way in my head.
  • I don’t know how the roll function is implemented, but it probably copies the memory contents, which might make things slow. Instead of shifting values around in memory, it could be more efficient to use a ring buffer, where values stay at the same memory location and you change only the index that points to the first element.
  • The current solution only allows delays that are multiples of the time step. That might be fine, but if you need to relax this constraint, you can use (circular) convolution to implement vector shifts that are not constrained to integer steps. However, I don’t expect that to be more efficient than the current solution.
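To sketch the ring-buffer idea: values stay put in memory and only indices move. The RingDelay name and its API are made up for illustration; delay_steps would come from the argmax over vocab similarities, as in the original step function.

```python
import numpy as np

class RingDelay:
    """Hypothetical ring-buffer delay line: instead of np.roll, only
    a write index moves, and the read position is derived from the
    currently requested delay (in timesteps)."""

    def __init__(self, dimensions, timesteps):
        self.buffer = np.zeros((timesteps, dimensions))
        self.timesteps = timesteps
        self.write = 0  # slot that will hold this step's input

    def step(self, x, delay_steps):
        # Overwrite the oldest slot with the newest input.
        self.buffer[self.write] = x
        # Read the value that was written delay_steps steps ago.
        read = (self.write - delay_steps) % self.timesteps
        out = self.buffer[read].copy()
        self.write = (self.write + 1) % self.timesteps
        return out
```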

One more thing: the Delay object technically depends on the time step. Using a nengo.Process, you could implement the delay in a way that works correctly regardless of the time step set on the simulator. But it is also slightly more difficult to implement and not necessarily worth it if you do not change the time step.
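A pure-Python sketch of the build-time conversion such a process would perform: in a nengo.Process subclass, this logic (and the history buffer allocation) would live in make_step, which receives the simulator's dt. delays_to_steps is a hypothetical helper for illustration, not Nengo API.

```python
def delays_to_steps(delays_seconds, dt):
    """Convert per-SP delays given in seconds to integer timesteps
    for a given simulator time step. (Hypothetical helper; in Nengo
    this conversion would happen inside nengo.Process.make_step.)"""
    return {word: max(1, int(round(seconds / dt)))
            for word, seconds in delays_seconds.items()}

# The same delay specification yields different buffer sizes
# depending on the time step the simulator is run with.
coarse = delays_to_steps({'A': 0.5, 'B': 1.0, 'C': 2.0}, dt=0.01)
fine = delays_to_steps({'A': 0.5, 'B': 1.0, 'C': 2.0}, dt=0.001)
```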