Multiplying a matrix with a vector

Hello Nengo Community,

I would like to multiply a 1D vector with a matrix. As a starting point, I am looking at the "Squaring the input" example.
What I want is a kind of network where, when a vector is fed in, it gets multiplied by a constant matrix. Let's say I have a vector y and a matrix A; I want the network to output A x y. The network only takes the vector y as an input. Matrix A is complex, but the output is not complex: it is the absolute value of (A x y).

So what I am thinking is to follow the squaring example, replacing the squaring function with a function where the input y gets multiplied by the matrix A.
Is this a correct way to do it?

One more question related to the squaring example: once the network learns the behaviour, how can we then do inference, just like we do in NengoDL with the predict function?

Thank you in advance for your answer.

Hi @Choozi,

There are two ways of multiplying a vector with a matrix in Nengo, and which method to use depends on how the matrix is populated. Multiplying by a constant matrix has a pretty straightforward implementation, without needing any neurons at all. In this case, you can perform the matrix multiplication by using the transform parameter on a nengo.Connection.

Here’s a simple example of multiplying a vector by the matrix:

[[0, 1, 0, 0]
 [0, 0, 1, 0]
 [0, 0, 0, 1]
 [1, 0, 0, 0]]

And here’s the Nengo code:

import nengo
import numpy as np

with nengo.Network() as model:
    inp = nengo.Node([1, 2, 3, 4])
    out = nengo.Node(size_in=4)

    matrix = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]])

    nengo.Connection(inp, out, transform=matrix, synapse=None)

    p_out = nengo.Probe(out)

with nengo.Simulator(model) as sim:
    sim.run(0.01)

print(sim.data[p_out][-1])

If you run the code above, you should get the output:

[2. 3. 4. 1.]

which is the correct result if you multiply the vector [1, 2, 3, 4] with the matrix, demonstrating that you can use the transform parameter of the nengo.Connection object to do the matrix multiplication for you. But, as noted before, for this method to work, you cannot change the matrix once the network is created.

I should note, however, that the method above doesn’t make any assumptions about the complex / real nature of the matrix elements. Rather, it just assumes that all of the matrix elements are numbers, and it is up to the Nengo user to interpret the result however they want to. If you want to do matrix multiplication with complex numbers (or vectors containing complex values), then I’d suggest splitting the vector and matrix into their respective real and imaginary components, and doing each matrix multiplication separately. I.e., you’ll have 4 connections, one to do each of the following multiplications:

  • real x real
  • real x imaginary
  • imaginary x real
  • imaginary x imaginary

You’ll need to sign the inputs appropriately to make sure that the math works out, though (e.g., the imaginary x imaginary term needs a -1 factor), as in the sketch below.
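For example, here is a minimal sketch of the four connections, with made-up values for A and y (the A_re, A_im, y_re, y_im names are just for illustration). The two output nodes sum their incoming connections, which takes care of the additions:

import nengo
import numpy as np

# Hypothetical 2x2 complex matrix A and complex vector y
A = np.array([[1 + 2j, 0 - 1j], [3 + 0j, 1 + 1j]])
A_re, A_im = A.real, A.imag

with nengo.Network() as model:
    y_re = nengo.Node([1.0, 0.5])   # real part of y
    y_im = nengo.Node([0.0, -1.0])  # imaginary part of y

    out_re = nengo.Node(size_in=2)  # real part of A @ y
    out_im = nengo.Node(size_in=2)  # imaginary part of A @ y

    # (A_re + i A_im)(y_re + i y_im)
    #   = (A_re y_re - A_im y_im) + i (A_re y_im + A_im y_re)
    nengo.Connection(y_re, out_re, transform=A_re, synapse=None)   # real x real
    nengo.Connection(y_im, out_re, transform=-A_im, synapse=None)  # imaginary x imaginary (the -1 factor)
    nengo.Connection(y_im, out_im, transform=A_re, synapse=None)   # real x imaginary
    nengo.Connection(y_re, out_im, transform=A_im, synapse=None)   # imaginary x real

    p_re = nengo.Probe(out_re)
    p_im = nengo.Probe(out_im)

with nengo.Simulator(model) as sim:
    sim.run(0.01)

# Compare against numpy's complex matrix-vector product
print(sim.data[p_re][-1], sim.data[p_im][-1])
print(A @ np.array([1.0 + 0.0j, 0.5 - 1.0j]))

In your case, where y is real, only the real x real and imaginary x real connections are needed.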

Side note: we make use of both of these techniques (the transform parameter for matrix multiplication, and splitting into real & imaginary components) in our implementation of the Circular Convolution operator. This operator is available as a built-in network in Nengo.

@xchoo thank you very much for your reply.
I have worked with the code and guidelines you provided, and I am able to multiply a constant complex matrix with a real vector; it is working amazingly. :slight_smile:
As a next step, I would like to multiply a variable real matrix with a constant complex matrix, i.e., A x B, where A is a constant complex matrix and B is a variable real matrix. One naive way would be to multiply each column of B with the matrix A and then combine/stack the results into a matrix. Can I do it in a more efficient way?

If you want to multiply a variable matrix with a constant matrix, there’s no getting away from needing to perform all of those multiplications and additions. Within Nengo, the easiest way to implement this is, as you suggested, to treat the matrix multiplication as a stack of vector multiplications, one per column of B. Since all of the multiplications need to be computed, there really isn’t a more efficient way to do this.
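As a side note, the whole stack can be packed into a single connection by building a block-structured transform with a Kronecker product: with row-major flattening, vec(A @ B) = kron(A, I) @ vec(B). A minimal sketch, with made-up 2 x 2 values:

import nengo
import numpy as np

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # constant matrix
B = np.array([[1.0, 2.0], [3.0, 4.0]])  # variable matrix (a constant input here, for testing)

# For row-major flattening, vec(A @ B) = kron(A, I) @ vec(B)
transform = np.kron(A, np.eye(2))

with nengo.Network() as model:
    b_in = nengo.Node(B.flatten())  # flattened B, fed in as a 4D signal
    ab_out = nengo.Node(size_in=4)  # flattened A @ B
    nengo.Connection(b_in, ab_out, transform=transform, synapse=None)
    p = nengo.Probe(ab_out)

with nengo.Simulator(model) as sim:
    sim.run(0.01)

print(sim.data[p][-1].reshape(2, 2))  # should match A @ B

This is just a convenience, though; the same multiplications and additions are still being performed.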

It should be noted, though, that the comment above applies to general matrices. If the matrix has a specific structure (e.g., it is symmetric), then it may be possible to take advantage of that structure in the implementation (e.g., only needing half of the multiplications for a symmetric matrix).

@xchoo thank you for your detailed reply.

Thank you for the clarification. I can do it with the transform method, but this transform method is like a transformation or encoding of the input, right?
But what if I have to do this multiplication with neurons, i.e., a network that learns this multiplication? How would I start?

That is correct. If you are multiplying a variable matrix with a fixed matrix, that can be thought of as applying a (complex) rotation matrix to the variable matrix. And since a rotation is a linear transformation, you can consider the transform on the connection a transformation as well.

If you want to do the multiplication in neurons, the only time you’d want to do that is if you are trying to multiply two variable matrices. Otherwise, you’d have a bunch of neurons performing unnecessary computation.

If you do want to perform the multiplication in neurons, what you’ll need to do is have an ensemble for each multiplication in the matrix multiplication. I.e., if you are multiplying two N x N matrices, you’ll need N x N x N ensembles to do the multiplications. In Nengo, we have the built-in Product network that you can use to create all of these multiplication ensembles (note that you’ll need to flatten the matrices into vectors to use it). This Nengo example demonstrates how each of the product ensembles will function.
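To give a sense of the wiring involved, here is a minimal sketch for a full 2 x 2 matrix multiplication in neurons (sizes and values are made up, with the inputs kept small so the product ensembles stay within their radii). Each of the N x N x N = 8 pairwise products gets its own dimension in the Product network, and a final transform sums over k to produce C[i, j]:

import nengo
import numpy as np

n = 2       # multiplying two n x n matrices
D = n ** 3  # one pairwise product per (i, j, k) triple

# Routing matrices: product p = (i, j, k) takes A[i, k] and B[k, j],
# and C[i, j] = sum_k A[i, k] * B[k, j]
TA = np.zeros((D, n * n))
TB = np.zeros((D, n * n))
TC = np.zeros((n * n, D))
for i in range(n):
    for j in range(n):
        for k in range(n):
            p = (i * n + j) * n + k
            TA[p, i * n + k] = 1  # route A[i, k] into input_a[p]
            TB[p, k * n + j] = 1  # route B[k, j] into input_b[p]
            TC[i * n + j, p] = 1  # sum over k into C[i, j]

with nengo.Network() as model:
    A_in = nengo.Node([0.1, 0.2, 0.3, 0.4])  # flattened A (row-major)
    B_in = nengo.Node([0.4, 0.3, 0.2, 0.1])  # flattened B (row-major)

    prod = nengo.networks.Product(n_neurons=100, dimensions=D)
    nengo.Connection(A_in, prod.input_a, transform=TA, synapse=None)
    nengo.Connection(B_in, prod.input_b, transform=TB, synapse=None)

    C_out = nengo.Node(size_in=n * n)  # flattened A @ B
    nengo.Connection(prod.output, C_out, transform=TC)
    p_out = nengo.Probe(C_out, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

print(sim.data[p_out][-1])  # should approximate (A @ B).flatten()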

Learning this multiplication is a bit trickier, seeing as there can potentially be a lot of multiplications that need to be done. This Nengo example demonstrates how to do it for just one product, and it can be extended to multi-dimensional products by increasing the dimensionality of the ensemble. However, the number of neurons you’d need increases quite rapidly, as does the training time… so it might be better to use multiple ensembles rather than just one big one. This is something you’ll need to play with, though, because I haven’t done it myself.
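As a starting point, here is a minimal sketch of learning a single product with the PES rule, loosely following the structure of that example (the ensemble sizes, learning rate, and input signals are all assumptions):

import nengo
import numpy as np

with nengo.Network() as model:
    stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
    pre = nengo.Ensemble(n_neurons=200, dimensions=2)
    post = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, pre)

    # Learned connection: starts out computing 0, and PES adapts its
    # decoders toward the product
    learn_conn = nengo.Connection(
        pre, post, function=lambda x: 0.0,
        learning_rule_type=nengo.PES(learning_rate=1e-4),
    )

    # Error signal: (post output) - (true product)
    error = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(post, error)
    nengo.Connection(stim, error, function=lambda x: -x[0] * x[1])
    nengo.Connection(error, learn_conn.learning_rule)

    p_post = nengo.Probe(post, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(10.0)

For a full matrix multiplication, you would repeat this structure (or grow the ensemble dimensionality) once per product, which is where the neuron counts and training time blow up.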

@xchoo Thank you very much for your detailed replies and for suggesting helpful tutorials.
I have one question related to the transform method. Can we use this transform method in NengoDL as well? For instance, as a first layer that takes an input and applies this matrix multiplication before feeding it to subsequent convolutional layers?

Thank you in advance for your reply.

Yes! Although, in NengoDL (and in regular Nengo), the transform attribute on nengo.Connection objects can typically only be of one type. I.e., if you set it to a matrix, you can’t also set it to be a convolution transform. To “chain” the two together, you’ll need a nengo.Node, something like this:

passthrough = nengo.Node(size_in=D, size_out=D)
nengo.Connection(layer1, passthrough, transform=<matrix>)
nengo.Connection(passthrough, layer2, transform=nengo.Convolution(...))

@xchoo Thank you for your prompt reply. :slight_smile: Will work on it and get back to you in case of any questions :slight_smile:

@xchoo Thank you again for your help. I am able to do it as shown below, and it is working fine.

inp = nengo.Node(np.zeros(rows * cols))

layer1 = nengo.Node(size_in=rows * cols)
nengo.Connection(inp, layer1, transform=Matrix, synapse=None)

layer2 = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=32, strides=2, kernel_size=3))(layer1, shape_in=(rows, cols, 1))
layer2 = nengo_dl.Layer(neuron_type)(layer2)

In the current example, layer1 is taking the whole input, that is, rows * cols values. However, what I want to do is feed the first layer one column or row at a time, just as we do in the convolutional layer. As you can see, the second layer accepts its input in shape (rows, cols, 1). How would I do this for the first layer?
I hope I have made my question understandable.
Furthermore, do the connection weights between the input and layer1 get updated during training?

Just to understand the transformation: I see the transformation simulated in Nengo as shown in the figure below. For instance, a vector [x, y]^T multiplied by the matrix [1 2; 3 4].

[Figure: Transformation]
Is this how this transformation works?

Thank you for your answer in advance. :slight_smile:

@xchoo

In the current example, layer1 is taking the whole input, that is, rows * cols values. However, what I want to do is feed the first layer one column or row at a time, just as we do in the convolutional layer. As you can see, the second layer accepts its input in shape (rows, cols, 1). How would I do this for the first layer?
I hope I have made my question understandable.

I guess I have fixed this in the following way:

Let's suppose rows = 2 and cols = 2, and
Matrix = [1 3;
          3 4]

inp = nengo.Node(np.zeros(rows * cols))

layer1 = nengo.Node(size_in=rows * cols)
nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix, synapse=None)
nengo.Connection(inp[2:4], layer1[2:4], transform=Matrix, synapse=None)

layer2 = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=32, strides=2, kernel_size=3))(layer1, shape_in=(rows, cols, 1))
layer2 = nengo_dl.Layer(neuron_type)(layer2)

Is this the correct way?

Furthermore, does this connection weights between the input and layer1 get updated during training?

When I output the weights of the model, it shows them as trainable weights, which means they get updated during training. Is there any way to make them non-trainable?

I am a little confused by what you are asking, but just to clarify: in your example, both x and y are in the same layer, correct? Now, if we call the first layer \vec{x} (instead of x and y), what you want is a connection that computes:

\vec{x}^T \times \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}

If that is the case, you can do it in one single connection, like so:

x0, x1 = (1, 2)
transform = np.array([[1, 2], [3, 4]])

with nengo.Network() as model:
    inp = nengo.Node([x0, x1])

    layer1 = nengo.Node(size_in=2)
    nengo.Connection(inp, layer1, transform=transform.T, synapse=None)

Note that the inp and layer1 nodes are both two-dimensional, since the number of elements in each is 2 (the input is “x” and “y”, and likewise for the output of layer1). This is in contrast to your code where the input and layer1 nodes are 4D (rows * cols).

Yes, you can. In NengoDL, you can set the .trainable attribute on a connection to True or False. The method to do this is described here.

@xchoo thank you for your prompt reply.

Sorry for creating this confusion. I was talking about a 4D input, inp = [x1, x2, x3, x4].
I wanted to apply a transform to [x1, x2] with one matrix, and to [x3, x4] with a different matrix.

Let's suppose rows = 2 and cols = 2, and Matrix1 and Matrix2 are 2 x 2 matrices, e.g.,
Matrix1 = [1 3;
           3 4]

inp = nengo.Node(np.zeros(rows * cols))

layer1 = nengo.Node(size_in=rows * cols)
nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix1, synapse=None)
nengo.Connection(inp[2:4], layer1[2:4], transform=Matrix2, synapse=None)

layer2 = nengo_dl.Layer(tf.keras.layers.Conv2D(filters=32, strides=2, kernel_size=3))(layer1, shape_in=(rows, cols, 1))
layer2 = nengo_dl.Layer(neuron_type)(layer2)

Now this is correct right?

Yes, you can. In NengoDL, you can set the .trainable attribute on a connection to True or False. The method to do this is described here.

I have been playing with these options; however, this doesn’t work for me:

inp = nengo.Node(np.zeros(rows * cols))
layer1 = nengo.Node(size_in=rows * cols)
con = nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix1, synapse=None)
net.config[con].trainable = False

AttributeError: type object 'Connection' has no attribute 'trainable'

I also tried this option:

nengo_dl.configure_settings(trainable=False)

But this then does not apply the transformation at all.
Is there any way to solve this?

@Choozi, yup, that would be correct. Just be careful about which way around the matrix goes. If you notice, in my example code I had to transpose the matrix to get the desired result. I would suggest you create a test network with Nengo nodes to make sure the connection transform gives you the right result.
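For example, a quick check of the sliced connections (with made-up matrices) could look like this:

import nengo
import numpy as np

Matrix1 = np.array([[1.0, 2.0], [3.0, 4.0]])
Matrix2 = np.array([[0.0, 1.0], [1.0, 0.0]])
x = np.array([1.0, 2.0, 3.0, 4.0])

with nengo.Network() as model:
    inp = nengo.Node(x)
    layer1 = nengo.Node(size_in=4)
    nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix1, synapse=None)
    nengo.Connection(inp[2:4], layer1[2:4], transform=Matrix2, synapse=None)
    p = nengo.Probe(layer1)

with nengo.Simulator(model) as sim:
    sim.run(0.01)

print(sim.data[p][-1])                   # network output
print(Matrix1 @ x[:2], Matrix2 @ x[2:])  # expected result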

Right, so that’s a 2-part thing. If you want to set the .trainable option on the connections, you’ll need to first set it on the entire model, like so:

with nengo.Network() as model:
    nengo_dl.configure_settings(trainable=None)
    ...

The nengo_dl.configure_settings function call “adds” the .trainable attribute to the various objects in the Nengo network. Note that the trainable=None option tells NengoDL to use the default trainable options for the objects in your network. If you do trainable=False, it turns off training for the entire network.

Once you call the configure_settings function, you can then set the specific trainable option on the connection you want:

with model:
    ...
    con = nengo.Connection(...)
    model.config[con].trainable = False
    ...

So, to put it all together:

with nengo.Network() as model:
    nengo_dl.configure_settings(trainable=None)
    ...
    con = nengo.Connection(...)
    model.config[con].trainable = False
    ...

@xchoo Thank you for your detailed reply.

I would suggest you create a test network with Nengo nodes to make sure the connection transform gives you the right result.

Yes, I did check, and it is giving me the correct results. Thank you.

So, to put it all together:

with nengo.Network() as model:
    nengo_dl.configure_settings(trainable=None)
    ...
    con = nengo.Connection(...)
    model.config[con].trainable = False

Doing this then sets the connection weights all to zero. For instance, if I modify the above example:

nengo_dl.configure_settings(trainable=None)
inp = nengo.Node(np.zeros(rows * cols))
layer1 = nengo.Node(size_in=rows * cols)
con = nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix1, synapse=None)
net.config[con].trainable = False

It sets the connection weights all to 0 and doesn’t apply transform=Matrix1.

What I want is to set the connection weights to Matrix1 and then have them frozen, i.e., made non-trainable.
I hope I explained my question well.

@Choozi, I’m not able to replicate the results you are describing. Can you post a short script that will illustrate the “all-0” connection weights?

For reference, here’s a test script I created with two almost identical networks in NengoDL. The first network is just a regular network with no specific trainable options. The second network has trainable=False set on the connection between inp and layer1. If you run the code below, you should see that both print statements return the same set of weights (an identity matrix of dimension 10).

import nengo
import nengo_dl
import numpy as np
import tensorflow as tf

print(">> regular network")
with nengo.Network() as model:
    inp = nengo.Node(np.zeros(10))
    layer1 = nengo.Node(size_in=10)
    conn = nengo.Connection(inp, layer1, transform=np.eye(10), synapse=None)

with nengo_dl.Simulator(model) as sim:
    print(sim.model.params[conn].weights)

print(">> trainable=False\n")
with nengo.Network() as model2:
    nengo_dl.configure_settings(trainable=None)
    inp = nengo.Node(np.zeros(10))
    layer1 = nengo.Node(size_in=10)
    conn = nengo.Connection(inp, layer1, transform=np.eye(10), synapse=None)
    model2.config[conn].trainable = False

with nengo_dl.Simulator(model2) as sim2:
    print(sim2.model.params[conn].weights)

@xchoo
Thank you for your detailed answer. Now it is working.
Carrying this further, I have implemented the same transformation with ensembles:

inp = nengo.Node(np.zeros(rows * cols))
layer1 = nengo.Ensemble(n_neurons=4, dimensions=4, neuron_type=nengo.LIF(), label="Layer 1")
nengo.Connection(inp[0:2], layer1[0:2], transform=Matrix, synapse=None)
nengo.Connection(inp[2:4], layer1[2:4], transform=Matrix, synapse=None)

According to my understanding, the Ensemble acts as a dense layer, right?

So layer1 in Keras would look like this:
layer1 = nengo_dl.Layer(tf.keras.layers.Dense(units=4))(inp)
Am I right?

That’s correct.

@xchoo ok! Thank you :slight_smile: