Any suggestions on computing the cumulative sum of a vector-valued input signal? In principle I think I want to do something like a multi-dimensional integrator but the issue I’m running into is in setting the time constants and radius of the ensemble because it may need to be different for different components of the vector (say one component is much smaller than the others). Is there a standard way to do this?

The nengo.networks.Integrator class will accept a dimensions argument to implement a “multi-dimensional integrator” as you suggested. But, as you probably discovered, this network assumes your cumulative sum (scaled by dt) will be a vector with L2-norm <= 1, since the sum is represented by a single Ensemble with radius=1.

Often this can be solved by scaling / transforming the input’s dimensions (using the transform parameter on the connection to the integrator) and then compensating for this scaling by inverting the transformation on the output. For example, you might halve the first dimension of the input, and then compensate by doubling the first dimension of the output.

If this still doesn’t work, it’s possible that:

- the dimensions are correlated in a way that causes their cumulative sum’s L2-norm to exceed the radius of 1,
- the ensemble is finding it difficult to accurately represent your high-dimensional vector, or
- there is some unacceptable drift in the integrator (this is often the case in higher dimensions).

There are more details to be said for each of these points. First, I would make sure that the problem is properly diagnosed (Direct mode neurons are helpful for this; another helpful strategy is to Probe your input data and then carry out the integration offline to see what the ideal integral looks like). Then, depending on the problem, you can try increasing the number of neurons, decreasing the time-step, or creating a separate integrator for each dimension (note: this is analogous to using an EnsembleArray to represent the integral).
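For the offline check, one simple approach (pure NumPy; the dt value and the input array here stand in for your actual probed data) is to cumulatively sum the probed input scaled by the time-step:

```python
import numpy as np

dt = 0.001
# Stand-in for sim.data[input_probe]: shape (n_timesteps, dimensions).
u_probed = np.tile([0.1, -0.2, 0.05], (1000, 1))
# The ideal integral is the cumulative sum of the input, scaled by dt.
ideal = np.cumsum(u_probed, axis=0) * dt
# Compare this against the probed output of the integrator to diagnose
# drift or saturation (e.g., plot both, or compare the final values).
print(ideal[-1])  # after 1 s of this constant input: [0.1, -0.2, 0.05]
```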

The last approach in particular is often quite helpful when you want to minimize any interactions between the dimensions. Note that, to ease implementation, you can use slice notation to connect each dimension to its own integrator:

```python
for i in range(dimensions):
    nengo.Connection(u[i], integrator[i], transform=...)
```

Another important note, since I’m not sure where this appears in our documentation: the input connection must use the same synapse=recurrent_tau that you provide to the integrator network in order to get true integration (see “Principle 3” of the NEF). Watch out for double-filtering, however, if you’ve made a connection through an intermediate node, for instance. Furthermore, if your neurons are spiking, the output of the integrator will also need to be filtered with some synapse (usually the default) in order to be usable downstream, so you will be observing a filtered version of the true integral. There are also approaches we can take if there’s too much or too little filtering. These details matter if you’re aiming for precision, so we can discuss them in turn if necessary.

Also, in general you want the recurrent_tau to be fairly large (on the order of 0.1 seconds usually works well)!

Thanks very much for the helpful reply! I’ve been able, based on your suggestions, to get something that works reasonably well, although at very high dimensions I am seeing some drift as you mentioned.