I am developing a lecture on the vestibular system and was planning to follow the steps and model in chapter 6.5 of the NEF book. I was presuming that I would find this example somewhere, but so far no cookie (a search for ‘vestibular’ returns nothing). I was wondering if you guys still have the implementation of the vestibular system in some older version of Nengo? Maybe it’s on some dusty old hard drive, lost now that the cognitive stuff is all the rage =]

In gratitude I am very willing to port/adapt and make it into a nice tutorial (jupyter notebook/ google colab) and post it here.

Unfortunately, it seems like this example is pretty old. The first reference to it is this 2002 paper. Even the super early versions of Nengo didn’t exist until 2007, so the code used to generate the plots for this example is likely in Matlab. It may be that @chris still has that code around somewhere, but I wouldn’t bet on it.

Looking at the description of the model, it doesn’t seem too complicated to replicate. If I have the time over the next few days, I can look at recreating the model in the current version of Nengo, but no guarantees!

And indeed, I didn’t think it would be too hard to replicate either, but in my experience there can be hidden challenges in implementing Nengo models, so I thought I’d ask. If you get the chance to recreate it I’d be eternally thankful – or, if you would find it easier to jot down some pseudocode, I could also go ahead and flesh it out.

The Book Implementation
The text in the chapter explains how Chris computes the cross product for this particular example. Namely:

In the multiplication example (available here for Nengo), the multiplication of two scalar values is computed by first projecting both values into a single 2D ensemble, and then solving for decoders that compute the product of the two inputs as an output. Similarly, the cross product can be computed by projecting the two 3D inputs into a 6D ensemble, and then solving for decoders that compute the cross product. In Nengo, this can be done with the following code.

First, we make a 6D ensemble, and we project the two 3D inputs into the first three and last three dimensions, respectively:

Next, we define the function that takes a 6D vector and computes the cross product (using NumPy’s np.cross):

import numpy as np

def cross_func(x):
    return np.cross(x[:3], x[3:])

Finally, we create an output node (so that a connection can be made with the appropriate decoders), and make a connection using the cross_func to solve for the decoders. In Nengo, this is done by specifying the function parameter when creating a nengo.Connection:

An Alternative Implementation
From this wiki page (to be honest, I had forgotten the exact formulation of the cross product, so I had to look it up), given two vectors $\mathbf{A} = [a_0, a_1, a_2]$ and $\mathbf{B} = [b_0, b_1, b_2]$, the cross product $\mathbf{A} \times \mathbf{B}$ can be computed as:
$\mathbf{A} \times \mathbf{B} = [a_1 b_2 - a_2 b_1, a_2 b_0 - a_0 b_2, a_0 b_1 - a_1 b_0]$

In essence, the cross product is a sum (counting the subtractions as sums as well) of pairwise products. In Nengo, nengo.networks.Product can be used to create an array of product computations. Thus, the cross product can be computed using this product array and some clever connection projections.

For example, a 6-dimensional Product array is defined as follows:

Next, the product $a_1 b_2$ can be computed with a connection that uses array indexing to connect the respective inputs to the appropriate product network:

We’ll then repeat these connection projections for the other products that need to be computed, and add a final connection to collapse the six product results into the desired 3D output.

Some Code
I’ve put both implementations into the following script: test_cross_product.py (3.9 KB)
You can run the code and experiment with various network parameters (e.g., change the radii of the ensembles, or change the number of neurons for the ensembles) to see how it impacts your simulation speed, and/or the accuracy of the networks.

Both of the implementations have their advantages and disadvantages. In theory, the product array implementation should build faster (because it’s computing the decoders for six 300-neuron populations, rather than one big 1800-neuron population), and be slightly more accurate (because there is less cross-talk between the input dimensions). But the code for the one-ensemble implementation is simpler (and more faithful to the book’s implementation).

It is interesting to think about the differences between the two types of computation (and the complexity of the decoding step) in the context of brain operations as well. It turns out that not only do the otoliths and canals have segregated outputs, but so do the vestibular nuclear populations (and cerebellar floccular cells) for the horizontal and vertical directions. One of the lovely things about the NEF is that it makes evident the difficulties in particular implementations, and the pairwise combinations of implementation 2 seem to me to fit the anatomy of the system.

It would be interesting to find out whether this alternative implementation produces any behavioural differences when compared to Chris’ original model (see Fig 6.15 and 6.16 in the NEF book). If I were to hazard a guess, I think it would match the equation prediction more closely than the experimental data, which may point to the need for a different projection or implementation. Keep us updated on the progress of this model.