The `nengo.networks.Integrator` class will accept a `dimensions` argument to implement a “multi-dimensional integrator” as you suggested. But, as you probably discovered, this network assumes your cumulative sum (scaled by `dt`) will be a vector with L2-norm <= 1, since the sum is represented by a single `Ensemble` with `radius=1`.
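To see the constraint concretely, here is a small offline numpy sketch (the input values and time-step are made up for illustration) that computes the ideal integral and checks its L2-norm:

```python
import numpy as np

dt = 0.001  # simulation time-step
t = np.arange(0, 1, dt)
# hypothetical 2-D input: two constant channels
u = np.stack([np.full_like(t, 1.5), np.full_like(t, 0.5)], axis=1)

# ideal integral: cumulative sum of the input, scaled by dt
ideal = np.cumsum(u, axis=0) * dt

# a single Ensemble with radius=1 only represents vectors with L2-norm <= 1
norms = np.linalg.norm(ideal, axis=1)
print(norms.max())  # exceeds 1, so this input would saturate the integrator
```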

Often this can be solved by scaling / transforming the input’s dimensions (using the `transform` parameter on the connection to the integrator) and then compensating for this scaling by inverting the transformation on the output. For example, you might halve the first dimension of the input, and then compensate by doubling the first dimension of the output.
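Here is the same idea applied offline in numpy (the signal and the factor of 0.5 are arbitrary; pick whatever keeps the scaled sum inside the radius):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 1, dt)
# hypothetical 2-D input whose integral would otherwise leave the unit ball
u = np.stack([np.full_like(t, 1.5), np.full_like(t, 0.5)], axis=1)

# halve the first dimension on the way in (the transform into the integrator)
scale = np.array([0.5, 1.0])
scaled = np.cumsum(u * scale, axis=0) * dt
print(np.linalg.norm(scaled, axis=1).max())  # now stays <= 1

# invert the scaling on the way out (the transform on the output connection)
recovered = scaled / scale
```

In nengo terms, the two scalings would correspond to diagonal `transform` matrices on the connections into and out of the integrator.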

If this still doesn’t work… it’s possible that:

- the dimensions are correlated in a way that causes their cumulative sum’s L2-norm to exceed the radius of 1,
- the ensemble is finding it difficult to accurately represent your high-dimensional vector, or
- there is some unacceptable drift in the integrator (this is often the case in higher dimensions).

There are more details to be said for each of these points. First, I would make sure that I’ve properly diagnosed the problem (`Direct` mode neurons are helpful for this; another helpful strategy would be to `Probe` your input data and then carry out the integration offline to see what the ideal looks like). Then, depending on the problem, you can try increasing the number of neurons, decreasing the time-step, or creating a separate integrator for each dimension (note: this is analogous to using an `EnsembleArray` to represent the integral).
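For the drift case in particular, comparing the probed input’s offline integral against a deliberately leaky toy model can make the failure mode obvious. This is only a caricature of a neural integrator (the leak factor here is made up), not nengo code:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 2, dt)
u = np.where(t < 0.5, 1.0, 0.0)  # step input that turns off at 0.5 s

# ideal integral, computed offline from the (probed) input
ideal = np.cumsum(u) * dt

# toy model of a slightly leaky integrator: x[k+1] = a*x[k] + dt*u[k]
a = 0.9995  # a < 1 means the stored value decays (drift toward zero)
x = np.zeros_like(u)
for k in range(len(u) - 1):
    x[k + 1] = a * x[k] + dt * u[k]

# after the input turns off, the ideal holds at 0.5 but the leaky model decays
print(ideal[-1], x[-1])
```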

The last approach in particular is often quite helpful when you want to minimize any interactions between the dimensions. Note that to ease implementation, you can use slice notation to connect each dimension of the input to its corresponding integrator:

```python
for i in range(dimensions):
    nengo.Connection(u[i], integrator[i], transform=...)
```
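One reason the per-dimension approach helps: each scalar integrator only needs its own component to stay within `radius=1`, rather than the joint L2-norm of the whole vector. A quick numpy check of the difference:

```python
import numpy as np

# a state the integrator might need to hold
x = np.array([0.9, 0.9])

# joint representation: the whole vector must fit in one radius-1 ensemble
print(np.linalg.norm(x))  # ~1.27, outside the unit ball

# separate scalar integrators: each component only needs |x_i| <= 1
print(np.abs(x).max())  # 0.9, fine for radius-1 scalar ensembles
```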

Let us know what you find out!