Function on a Connection with Loihi

A quick question about approximating functions with neurons:

I noticed in the multiplication example, and with my quick experimenting, that a user-defined function used on a connection can only act on the input of the connection. An easy way to make the function take in multiple arguments is to increase the dimensionality of the input to the connection, and index the arguments as needed (this was done in the multiply example).

Using Loihi, I keep running into errors whenever I try to create ensembles with a dimension greater than 1. Are higher-dimensional ensembles not supported in Nengo Loihi? If that’s the case, I don’t see any obvious way to apply a function on multiple inputs if the input of the connection can only represent a single value.

If that doesn’t make sense, please let me know. I can draft a couple of code snippets to try and clarify my point, but hopefully it comes across clearly.

Thanks!

Hi @luke and welcome! What error(s) did you run into? Could you provide some code and an error message? Thanks!

Hello Aaron! Sure thing.

Here, I’m adapting the addition example from the Nengo Core documentation. Here is the network and output when using the core simulator in Jupyter:

%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import nengo
import nengo_loihi

with nengo.Network(label="Addition") as model:
    inp1 = nengo.Node(output=2)
    inp2 = nengo.Node(output=4)
    
    A = nengo.Ensemble(200, dimensions=2, label="Input Ensemble", radius=10)
    B = nengo.Ensemble(100, dimensions=1, label="Output Ensemble", radius=10)
    
    nengo.Connection(inp1, A[0])
    nengo.Connection(inp2, A[1])
    nengo.Connection(A, B, function=lambda x: x[0] + x[1])
    
    inp1_p = nengo.Probe(inp1, 'output', label="input 1")
    inp2_p = nengo.Probe(inp2, 'output', label="input 2")
    A_p = nengo.Probe(A, 'decoded_output', synapse=0.05, label="representation")
    B_p = nengo.Probe(B, 'decoded_output', synapse=0.05, label="added")
    
with nengo.Simulator(model) as sim:
    sim.run(10)

# Plot the decoded output of the ensemble
plt.figure(figsize=(12,8))
plt.plot(sim.trange(), sim.data[inp1_p], label=inp1.label)
plt.plot(sim.trange(), sim.data[inp2_p], label=inp2.label)
plt.plot(sim.trange(), sim.data[A_p], 'r', label=A_p.label)
plt.plot(sim.trange(), sim.data[B_p], 'g', label=B_p.label)
plt.legend()
plt.xlabel('time [s]');

Now, when trying to run on Loihi:

nengo_loihi.set_defaults()
with nengo.Network(label="Addition") as model:
    inp1 = nengo.Node(output=2)
    inp2 = nengo.Node(output=4)
    
    A = nengo.Ensemble(200, dimensions=2, label="Input Ensemble", radius=10)
    B = nengo.Ensemble(100, dimensions=1, label="Output Ensemble", radius=10)
    
    nengo.Connection(inp1, A[0])
    nengo.Connection(inp2, A[1])
    nengo.Connection(A, B, function=lambda x: x[0] + x[1])
    
    inp1_p = nengo.Probe(inp1, 'output', label="input 1")
    inp2_p = nengo.Probe(inp2, 'output', label="input 2")
    A_p = nengo.Probe(A, 'decoded_output', synapse=0.05, label="representation")
    B_p = nengo.Probe(B, 'decoded_output', synapse=0.05, label="added")
    
with nengo_loihi.Simulator(model) as sim:
    sim.run(10)

Does the end of this stack trace indicate that I have to include encoders for each of the ensembles? If so, how?

Then, if I instead try to represent each of the inputs in its own dedicated ensemble rather than representing both in A:

nengo_loihi.set_defaults()
with nengo.Network(label="Addition") as model:
    inp1 = nengo.Node(output=2)
    inp2 = nengo.Node(output=4)
    
    
    #A = nengo.Ensemble(200, dimensions=2, label="Input Ensemble", radius=10)
    A = nengo.networks.EnsembleArray(200,
                                     n_ensembles=2,
                                     ens_dimensions=1,
                                     radius=10)
    B = nengo.Ensemble(100, dimensions=1, label="Output Ensemble", radius=10)
    
    A.output.output = lambda t, x: x  # make the EnsembleArray output node non-passthrough
    nengo.Connection(inp1, A.ea_ensembles[0])
    nengo.Connection(inp2, A.ea_ensembles[1])
    nengo.Connection(A.output, B, function=lambda x: x[0] + x[1])
    
    inp1_p = nengo.Probe(inp1, 'output', label="input 1")
    inp2_p = nengo.Probe(inp2, 'output', label="input 2")
    A_p = nengo.Probe(A.output, synapse=0.05, label="representation")
    B_p = nengo.Probe(B, 'decoded_output', synapse=0.05, label="added")

with nengo_loihi.Simulator(model) as sim:
    sim.run(10)

…

This leads me to think that when using Loihi as the backend, one must dedicate entire ensembles to each value that one would like to use. Is this true?

Also, when applying a function on a connection, how can I incorporate other inputs (aside from using transform to apply weights)? For example, say I wanted to apply this function across a connection from x to some other ensemble:

def my_func(x, value):
    return x + 2*value

How could I include the second argument? So far, I’ve only been able to get a connection function to take a single argument: the input of the connection. Two scenarios where I might want to include value are when

  1. value is the decoded output of a separate ensemble
  2. value is the output from a node

An immediate remedy seems to try breaking down the computation across multiple objects, but I thought you might have a recommendation to more easily implement this.

Hopefully that clarifies some things! I will save my remaining questions for later in the thread since this post is so long. :slight_smile:

Thanks!

Hi @luke

NengoLoihi doesn’t currently support slicing on inputs to nengo.Ensemble objects that are implemented on the Loihi board. The simple fix to your issue is to combine the inp nodes into one 2D output instead of having two 1D outputs, like so:

with nengo.Network(label="Addition") as model:
    inp = nengo.Node(output=[2, 4])  # Combined inp1 and inp2 
    
    A = nengo.Ensemble(200, dimensions=2, label="Input Ensemble", radius=10)
    B = nengo.Ensemble(100, dimensions=1, label="Output Ensemble", radius=10)
    
    nengo.Connection(inp, A)  # Combined inp removes need for slicing into A
    nengo.Connection(A, B, function=lambda x: x[0] + x[1])

Does the end of this stack trace indicate that I have to include encoders for each of the ensembles? If so, how?

Digging further into the stack trace: the error occurs in the builder for the nengo.Connection object. The builder is our “compiler” that turns your Nengo code into a set of operations that are then placed on the Loihi board. The code block in question is:

    elif isinstance(conn.post_obj, Ensemble):
        assert isinstance(post_obj, LoihiBlock)
        assert pre_slice == slice(None), "Not implemented"
        assert post_slice == slice(None)
        assert target_encoders is not None
        if target_encoders not in post_obj.named_synapses:
            build_decode_neuron_encoders(model, conn.post_obj, kind=target_encoders)

        mid_ax = Axon(mid_obj.n_neurons, label="encoders")
        mid_ax.target = post_obj.named_synapses[target_encoders]
        mid_ax.set_compartment_axon_map(mid_axon_inds)
        mid_obj.add_axon(mid_ax)
        model.objs[conn]["mid_axon"] = mid_ax

        post_obj.compartment.configure_filter(post_tau, dt=model.dt)

So, the specific connection object which is tripping up the assert condition has a post object that is a nengo.Ensemble. The pre object is what the connection connects from, and the post object is what the connection connects to. The specific assert statement that is failing is:

assert post_slice == slice(None)

which indicates that NengoLoihi expects no slicing to be done on the post object. This narrows the connections causing the issue down to any connection to a nengo.Ensemble where slicing is done on the post object, i.e.:

nengo.Connection(inp1, A[0])
nengo.Connection(inp2, A[1])

Thus, to remedy the problem (i.e., get rid of the slicing), the simplest solution is to combine the inp values into one 2D value (see the code above).

Also, when applying a function on a connection, how can I incorporate other inputs (aside from using transform to apply weights)? For example, say I wanted to apply this function across a connection from x to some other ensemble:

This question is a conceptual NEF (Neural Engineering Framework) question, and the following answer applies to Nengo as well as NengoLoihi. Using the NEF, a population of neurons can be made to “compute” a specific function by computing appropriate decoders that map the neural activity of an ensemble of neurons (which changes based on the input to the ensemble) into an output space that approximates the desired function. To that end, any function to be approximated by the ensemble has to have ALL of the function inputs represented in the neural activity of the ensemble.

To do this in (vanilla) Nengo, one would do:

ens = nengo.Ensemble(200, dimensions=2)
x_node = nengo.Node(...)
value_node = nengo.Node(...)
nengo.Connection(x_node, ens[0])
nengo.Connection(value_node, ens[1])

output = nengo.Node(size_in=1)

def my_func(x):
    return x[0] + 2*x[1]  # x[0] is x, x[1] is value
nengo.Connection(ens, output, function=my_func)

Now, this poses an issue with NengoLoihi, since (as mentioned above) nengo.Ensemble objects implemented on the Loihi board don’t support input slicing, and your scenario doesn’t allow for both $x$ and $value$ to come from the same node. However, we can take advantage of the transform parameter on nengo.Connections to achieve the same effect:

ens = nengo.Ensemble(200, dimensions=2)
x_node = nengo.Node(...)
value_node = nengo.Node(...)
nengo.Connection(x_node, ens, transform=[[1], [0]])
nengo.Connection(value_node, ens, transform=[[0], [1]])

output = nengo.Node(size_in=1)

def my_func(x):
    return x[0] + 2*x[1]  # x[0] is x, x[1] is value
nengo.Connection(ens, output, function=my_func)

Fun fact: This method of specifying “slices” on an input connection was the method used before we introduced “proper” slicing into the Nengo codebase. :smiley:
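To make the equivalence concrete, here’s a plain-NumPy sketch (the input values are made up for illustration) of how the two transform matrices route each 1D input into its own dimension of the 2D ensemble, exactly as slicing would have:

```python
import numpy as np

# transform=[[1], [0]] and transform=[[0], [1]] written out as matrices
T_x = np.array([[1.0], [0.0]])
T_value = np.array([[0.0], [1.0]])

x = np.array([0.5])       # stand-in for x_node's output
value = np.array([-0.3])  # stand-in for value_node's output

# The ensemble's 2D input is the sum of the transformed connections,
# which is what slicing into ens[0] and ens[1] would have produced.
ens_input = T_x @ x + T_value @ value
assert np.allclose(ens_input, [0.5, -0.3])
```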

Hey @xchoo,

This makes perfect sense when you point to the source code. Thanks for clearly explaining it. Any particular reason why slicing a post object is a bad idea when targeting Loihi? I’m guessing it has something to do with how resources get allocated in the native SDK for the board.

Gotcha. Repeating it back to you to make sure I get it: if you want to approximate a multivariable function with a group of neurons, all of the involved variables must be encoded into the spiking activity of those neurons. Excluding time as a variable to be encoded, I suppose.

Awesome workaround. I feel like I’ve stumbled upon solutions in a similar vein, but I never put two and two together. Isn’t a similar technique used in the Matrix Multiplication example, but with ensemble arrays?

On a final note, will this “proper” slicing be offered in a future version of NengoLoihi? This might be implicitly answered by your comments above on why it’s not used for LoihiBlock objects (am I using that term right?) in the first place, but I will await your answer!

Thanks for the kind and clear help.

I’m not 100% sure on this, but I believe this is because the Loihi hardware itself is not set up to handle slicing (it just does matrix operations). In (vanilla) Nengo, we put slicing in as a convenient shortcut for providing the full transformation matrix, and under the hood, I believe we just use Numpy’s array slicing functionality to do this. It might be possible to have the NengoLoihi builder object convert slices back into full transformation matrices, but we decided not to do that (didn’t want to make it too black magic, and it probably introduces edge cases we do not consider or support), and instead inform the user what is / isn’t possible to do, and let them specify their own solution.
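As an illustration of that idea, here’s a sketch of what converting a post slice back into a full transform matrix could look like. `post_slice_to_transform` is a hypothetical helper written for this example, not part of the NengoLoihi builder:

```python
import numpy as np

def post_slice_to_transform(post_size, indices):
    # Build a (post_size x len(indices)) matrix that scatters a
    # len(indices)-dimensional input into the sliced post dimensions.
    T = np.zeros((post_size, len(indices)))
    for col, row in enumerate(indices):
        T[row, col] = 1.0  # route input dim `col` into post dim `row`
    return T

# `nengo.Connection(inp1, A[0])` on a 2D ensemble A corresponds to:
T = post_slice_to_transform(2, [0])
assert np.array_equal(T, [[1.0], [0.0]])
```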

I see… thanks for the insight!

I just noticed this comment. Yes, that is correct! I should also mention that I use the phrase “approximate a function” instead of “compute a function” because that’s a more accurate reflection of what the ensemble decoders are doing. Consider the following Nengo model:

with nengo.Network() as model:
    input_node = nengo.Node(lambda t: t - 2)
    ens = nengo.Ensemble(50, 1)
    output_node = nengo.Node(size_in=1)
    
    nengo.Connection(input_node, ens, synapse=None)
    nengo.Connection(ens, output_node, function=lambda x: x ** 2)
    p_out = nengo.Probe(output_node, synapse=0.005)

This Nengo model “computes” the square of an input, and looking at the probed output for $x$ values between -1 and 1, we get:

[image: probed output closely tracking $x^2$ for inputs from -1 to 1]

which demonstrates that the neural ensemble does a pretty good job at “computing” the $x^2$ function. However, if you provide the ensemble inputs beyond -1 to 1, you get a different result:

[image: probed output deviating from $x^2$ for inputs beyond the -1 to 1 range]

If the neural ensemble were truly computing the $x^2$ function, this would not be the case (it would give the right output regardless of the input). This is why I stress that the neural ensemble is only approximating any given function, rather than computing it.

This example also illustrates the importance of appropriately scaling your inputs to match what is expected of the neural ensemble. By default, any nengo.Ensemble created has a radius of 1. This means that the decoders are optimized to best approximate the output function within a hypersphere of radius 1 (centered around 0). This is why in the first plot, the output is limited from -1 to 1.
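To illustrate why decoders only work well inside the radius, here’s a toy NumPy sketch using ReLU “tuning curves” as a stand-in for Nengo’s neuron model (the neuron parameters here are made up, and this is not Nengo’s actual decoder solver). The decoders are solved by least squares over evaluation points inside the radius, and the approximation error grows once inputs leave that range:

```python
import numpy as np

rng = np.random.default_rng(0)
n, radius = 50, 1.0

# Toy rate neurons with random encoders, biases, and gains; an
# illustrative stand-in for nengo's tuning curves.
enc = rng.choice([-1.0, 1.0], n)
bias = rng.uniform(-1, 1, n)
gain = rng.uniform(0.5, 2.0, n)
rates = lambda x: np.maximum(0, gain * (enc * x[:, None] + bias))

# Decoders are optimized only over evaluation points inside the radius
eval_pts = np.linspace(-radius, radius, 200)
A = rates(eval_pts)
d, *_ = np.linalg.lstsq(A, eval_pts ** 2, rcond=None)

inside = np.linspace(-1, 1, 50)
outside = np.linspace(-2, 2, 50)
err_in = np.abs(rates(inside) @ d - inside ** 2).max()
err_out = np.abs(rates(outside) @ d - outside ** 2).max()
assert err_out > err_in  # approximation degrades outside the radius
```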

If we are expecting the input to the ensemble to be within a different range (e.g., from -2 to 2), we can take this into account by setting the radius of the ensemble**:

ens = nengo.Ensemble(50, 1, radius=2)

Now, if we plot the output of the ensemble given an input from -2 to 2, we get:
[image: probed output approximating $x^2$ for inputs from -2 to 2 with radius=2]

Et voilà! The ensemble now quite accurately approximates $x^2$ over this new range.
**Note: Instead of setting the radius, you can also manually compensate for different ranges by modifying the transform on the input connection (and compensating for that compensation on the output connection’s transform). This method is more flexible and better suited to certain use cases.
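As a rough sketch of the footnote’s alternative, here is the arithmetic for the $x^2$ example (the scaling factors are specific to this function and scale, not general defaults): compress the input by 1/2 so inputs in [-2, 2] fit the default radius of 1, then compensate on the output connection. For $x^2$ the output compensation is 4, since $(x/2)^2 \cdot 4 = x^2$:

```python
import numpy as np

# Assumed scaling for the x**2 example: compress the input into the
# default radius of 1, then undo the scaling after the function.
x = np.linspace(-2, 2, 9)   # inputs outside the default radius
scaled_in = x / 2.0         # transform=0.5 on the input connection
decoded = scaled_in ** 2    # ideal decode of the squaring function
out = decoded * 4.0         # transform=4 on the output connection
assert np.allclose(out, x ** 2)
```

For a different function or input range, the output compensation would differ (it is not always the square of the scale factor), which is part of why this method is more flexible but also more manual than setting the radius.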

However, with this new radius, we can zoom back in to the -1 to 1 range to see its effect:

[image: noisier probed output over the -1 to 1 range with radius=2]

Here we see that because the number of neurons has not changed, but the representation range has grown from [-1, 1] to [-2, 2], the approximated output is substantially noisier. This shows that scaling the radius to match your expected input comes with a trade-off: covering a larger range with the same number of neurons reduces the accuracy of the representation.
