Hi everyone,
In my research, I am trying to use SNNs for model predictive control (MPC). Currently, I am trying to wrap my head around Nengo and Nengo DL to see whether they are suitable for this task.
In my prediction network, I would like to predict a future system state that lies Δt seconds in the future. Please note that Δt may be much larger than the dt parameter of the Nengo simulation (e.g. 0.02 vs. 0.001). Since the model ought to be auto-regressive, the prediction should be fed back as the input of another node after Δt. Based on my understanding, this requires the output to be "stored" in some buffer of a node and then released later. I have come across some implementations for Nengo that provide such a "discrete delay"; I think the delay is called "discrete" because the signal is held for int(dt_delay / dt_sim) simulation steps. However, neither implementation works "out of the box" with Nengo DL, and I have no idea (but would like to know) whether any of this can also be done on a Loihi chip.
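For concreteness, with the example numbers above, the buffer length works out as:

dt_sim = 0.001                  # Nengo simulation timestep
dt_delay = 0.02                 # desired prediction offset Δt
steps = int(dt_delay / dt_sim)  # = 20 buffered simulation steps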
The first implementation I came across is this one by @tcstewar:
import numpy as np
import nengo

class DiscreteDelay(nengo.synapses.Synapse):
    def __init__(self, delay, size_in=1):
        self.delay = delay
        super().__init__(default_size_in=size_in, default_size_out=size_in)

    def make_state(self, shape_in, shape_out, dt, dtype=None, y0=None):
        return {}

    def make_step(self, shape_in, shape_out, dt, rng, state=None):
        steps = int(self.delay / dt)
        if steps == 0:
            # zero-step delay: just pass the signal through
            def step_delay(t, x):
                return x
            return step_delay
        assert steps > 0

        # ring buffer holding the last `steps` input vectors
        state = np.zeros((steps, shape_in[0]))
        state_index = np.array([0])

        def step_delay(t, x, state=state, state_index=state_index):
            result = state[state_index]  # read the oldest buffered value
            state[state_index] = x       # overwrite it with the newest input
            state_index[:] = (state_index + 1) % state.shape[0]
            return result

        return step_delay
...
nengo.Connection(predicted_future_state, predicted_current_state,
                 synapse=DiscreteDelay(self.t_delay, size_in=self.state_dim))
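For what it is worth, this synapse is easy to sanity-check in a plain (non-DL) Nengo simulation; here is the kind of minimal script I use for that (my own sketch, and the extra one-step offset that Nengo adds on filtered connections may shift the alignment slightly):

import numpy as np
import nengo

t_delay = 0.02   # delay Δt in seconds
dt = 0.001       # simulation timestep, so int(t_delay / dt) = 20 steps

with nengo.Network() as net:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * 5 * t))
    delayed = nengo.Node(size_in=1)
    nengo.Connection(stim, delayed, synapse=DiscreteDelay(t_delay))
    p_stim = nengo.Probe(stim)
    p_delayed = nengo.Probe(delayed)

with nengo.Simulator(net, dt=dt) as sim:
    sim.run(0.1)

steps = int(t_delay / dt)
# the delayed trace should match the stimulus shifted by ~steps samples
print(sim.data[p_stim][:5].ravel())
print(sim.data[p_delayed][steps:steps + 5].ravel())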
I also tried this implementation from nengolib (taken from the nengo3 branch): arvoelke.github.io/nengolib-docs/nengolib.synapses.DiscreteDelay.html
which is used similarly:
nengo.Connection(predicted_future_state, predicted_current_state,
                 synapse=DiscreteDelay(int(self.t_delay / self.dt)))
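As an aside, if I read the nengolib docs correctly, this DiscreteDelay(k) is just the discrete transfer function z^-k, which plain Nengo can also express directly via LinearFilter; a minimal sketch of that equivalent (I have not verified this one with Nengo DL either):

import nengo

def pure_delay(k):
    # H(z) = z**-k: numerator [1], denominator [1, 0, ..., 0] with k zeros
    return nengo.LinearFilter([1], [1] + [0] * k, analog=False)

# used the same way as the nengolib synapse above:
# nengo.Connection(predicted_future_state, predicted_current_state,
#                  synapse=pure_delay(int(self.t_delay / self.dt)))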
In any case, both give me error messages that I must admit I do not fully understand. I have come up with my own working version, which I currently use in my project with Nengo DL, but it is VERY hacky and relies on some assumptions that I am not sure (will) always hold.
import numpy as np

class DelayNode:
    """Hacky delay buffer meant to be wrapped in a nengo.Node under Nengo DL."""

    def __init__(self, steps):
        assert steps >= 0
        self.steps = steps
        self.hist = [[]]    # one sub-list of batch elements per timestep
        self.last_t = 0.0

    def reset(self):
        self.hist = [[]]
        self.last_t = 0.0

    def step(self, t, x):
        if self.last_t > t:
            # time jumped backwards: a new batch is being processed
            self.reset()
        elif self.last_t < t:
            # time advanced: open a fresh sub-list for this timestep
            self.last_t = t
            self.hist.append([])
        if len(self.hist[0]) == 0 and len(self.hist) > 1:
            # the oldest timestep has been fully consumed; drop it
            self.hist.pop(0)
        # store a copy, in case the simulator reuses the input buffer
        self.hist[-1].append(np.array(x))
        if len(self.hist) < self.steps:
            # not enough history buffered yet; output zeros until it fills
            return np.zeros(x.shape)
        return self.hist[0].pop(0)
...
model.delay = DelayNode(steps=int(self.t_delay / self.dt))
delaynode = nengo.Node(model.delay.step, size_in=self.state_dim)
nengo.Connection(predicted_future_state, delaynode)
nengo.Connection(delaynode, predicted_current_state)
Firstly, I observed by simply printing some toy inputs to the network that the node input is processed not in batches but sequentially, in a pattern that (I think) always looks like e1_t0, e2_t0, ..., en_t0, e1_t1, e2_t1, ..., en_t1, ...
Then, the node infers the batch size n from the point at which t first increases beyond last_t. However, assuming that the model "has knowledge" of the batch size to come is also not really great…
Finally, the node automatically resets its buffer when it observes a t < last_t, as this seems to indicate that a new batch of data is being processed. I would have liked to use t == 0.0 as the reset condition instead, but it seems that the simulation always starts at t = dt (not sure why? presumably because t marks the end of the first simulation step).
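For reference, the observations above come from a throwaway debugging node along these lines (my own snippet, the names are made up):

import nengo

def spy(t, x):
    # with Nengo DL and minibatch_size=n, this prints n lines per timestep
    # (one per batch element), and the first printed t is dt, not 0.0
    print(f"t={t:.4f}  x={x}")
    return x

spy_node = nengo.Node(spy, size_in=1)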
My key question is whether there are other solutions to my problem that I am unaware of, but that some of you have perhaps worked with in the past, in particular in a Nengo DL context? In any case, any input is very welcome, as I am convinced this is not the best solution to my problem.
Thank you for your kind assistance and best wishes,
Justus