Learning a 3-layer SNN in Nengo using BCM

Hi, I am new to Nengo and I am using the BCM learning rule to learn the weight matrices.
My aim is to classify audio. I have created a three-layer network so far, but when I run it I get:

nengo.exceptions.ValidationError: Connection.learning_rule_type: Learning rule 'BCM(learning_rate=1e-10)' can only be applied on connections to neurons. Try setting `solver.weights` to True or connecting between two Neurons objects.

I’m not quite sure what the problem is; could you please help? Thank you.
If you need more information please let me know.
Here is my code:

import random
import numpy as np
import matplotlib.pyplot as plt
from nengo.processes import WhiteSignal
# %matplotlib inline
import nengo
import nengo_loihi
print(nengo.BCM.__doc__)


with nengo.Network(label='BCM') as model:
    audio = nengo.Node(WhiteSignal(period=10420, high=5, rms=0.5), size_out=1) 

    pre = nengo.Ensemble(n_neurons=700, dimensions=1)
    hidden = nengo.Ensemble(n_neurons=473, dimensions=1)
    nengo.Connection(audio, pre)  # connect input to pre
    nengo.Connection(pre, hidden)

    conn_1 = nengo.Connection(pre, hidden, function=lambda x: 1, transform=1, learning_rule_type=nengo.BCM(learning_rate=1e-10))

    post = nengo.Ensemble(n_neurons=10, dimensions=1)
    nengo.Connection(hidden, post)
    conn_2 = nengo.Connection(hidden, post, function=lambda x: 1, transform=1, learning_rule_type=nengo.BCM(learning_rate=1e-10))
# connect hidden and post
# Add in learning
with model:
    error = nengo.Ensemble(700, dimensions=1)
    error_p = nengo.Probe(error, synapse=0.01)

    # Error = actual - target = post - pre
    nengo.Connection(post, error)
    nengo.Connection(pre, error, transform=-1)

    # Add the learning rule to the connection
    conn_1.learning_rule_type = nengo.BCM()
    conn_2.learning_rule_type = nengo.BCM()

    # Connect the error into the learning rule
    nengo.Connection(error, conn_1.learning_rule)
    nengo.Connection(error, conn_2.learning_rule)
    # Probes
    audio_p = nengo.Probe(audio)
    pre_p = nengo.Probe(pre, synapse=0.01)
    hidden_p = nengo.Probe(hidden, synapse=0.01)
    post_p = nengo.Probe(post, synapse=0.01)
# Verify that it acts as a communication channel


with nengo_loihi.Simulator(model) as sim:
    sim.run(2.0)

Hi @kkkkking, and welcome to the Nengo forums! :smiley:

In Nengo, there are two types of connections you can create: decoded connections, and non-decoded connections.

Decoded connections are NEF-style connections (you can read a summary of the NEF here if you are not familiar with it), where the connection weights between two neural populations are “factored” into multiple components: decoders, encoders, and transformation weights. Decoded connections are what you get when you create a “standard” (default) connection between nengo.Ensemble objects, i.e., by doing this:

nengo.Connection(ens1, ens2, ...)

Non-decoded connections are connections between neural populations that do not use the encoder/decoder/transform factorization. Rather, for a non-decoded connection, the connection weights are just the “full” connection weight matrix (i.e., the weights between the individual neurons in each population). There are two ways to create a connection like this. The first is to connect the .neurons objects of the two nengo.Ensemble objects. Note that if you do this, you’ll need to specify the connection weight matrix yourself (using the transform parameter), and it should have a shape of (post.n_neurons, pre.n_neurons). For these connections, we typically just assign random values to the connection weights, like so:

nengo.Connection(
    ens1.neurons,
    ens2.neurons,
    transform=np.random.random((ens2.n_neurons, ens1.n_neurons)),
)

You can also create “full” non-decoded connections by connecting two nengo.Ensemble objects and specifying a solver with the weights=True option. This lets you create a connection that computes a predefined function, but where the full connection weight matrix is used, rather than having Nengo factor it into decoders, encoders, and a transformation matrix:

nengo.Connection(ens1, ens2, solver=nengo.solvers.LstsqL2(weights=True))

If you read the notes about the BCM rule (see here), you’ll see that the BCM rule operates on the “full” connection weight matrix rather than on decoded connections, so you’ll need to reconfigure your network to use these connections instead. Either of the two approaches (i.e., connecting to .neurons or providing a specific solver object) will work.

As an example of how you would reconfigure one of your connections:

conn_1 = nengo.Connection(
    pre, hidden, function=lambda x: 1, transform=1, 
    learning_rule_type=nengo.BCM(learning_rate=1e-10))

would be modified to:

conn_1 = nengo.Connection(
    pre, hidden, function=lambda x: 1, transform=1, 
    solver=nengo.solvers.LstsqL2(weights=True),
    learning_rule_type=nengo.BCM(learning_rate=1e-10))

or

conn_1 = nengo.Connection(
    pre.neurons, hidden.neurons, 
    transform=np.random.random((hidden.n_neurons, pre.n_neurons)),
    learning_rule_type=nengo.BCM(learning_rate=1e-10))
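
For reference, here is a minimal sketch of what both of your learned connections could look like with the .neurons approach (ensemble sizes copied from your post; this is an untested starting point, not a drop-in replacement):

import numpy as np
import nengo

with nengo.Network(label='BCM') as model:
    pre = nengo.Ensemble(n_neurons=700, dimensions=1)
    hidden = nengo.Ensemble(n_neurons=473, dimensions=1)
    post = nengo.Ensemble(n_neurons=10, dimensions=1)

    # Non-decoded connections: random initial weights with the BCM rule
    conn_1 = nengo.Connection(
        pre.neurons, hidden.neurons,
        transform=np.random.random((hidden.n_neurons, pre.n_neurons)),
        learning_rule_type=nengo.BCM(learning_rate=1e-10))
    conn_2 = nengo.Connection(
        hidden.neurons, post.neurons,
        transform=np.random.random((post.n_neurons, hidden.n_neurons)),
        learning_rule_type=nengo.BCM(learning_rate=1e-10))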

Thank you. I think I understand, and the code works now.
But now I have another problem: I don’t know how to feed my dataset into the code instead of the WhiteSignal.
Thank you so much!

Importing your dataset into Nengo is relatively straightforward. The nengo.Node object (which your code already uses to generate the WhiteSignal input) accepts a Python function as its first parameter. When you do this, Nengo executes that function on every timestep of the simulation, and the function’s return value becomes the node’s output. Here is a Nengo example that demonstrates how to use this functionality to cycle through a set of values presented to the network at a regular interval (refer to the cycle_array function in Step 2 of the notebook).
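
For instance, a minimal sketch of that pattern could look like this (the dataset array and presentation time below are placeholders, not values from your code):

import numpy as np
import nengo

# Placeholder dataset: 100 samples, each a 700-dimensional vector
dataset = np.random.random((100, 700))
presentation_time = 0.25  # seconds each sample is shown (assumed value)

def cycle_array(t):
    # Select the sample corresponding to the current simulation time
    index = int(t / presentation_time) % len(dataset)
    return dataset[index]

with nengo.Network() as model:
    stim = nengo.Node(cycle_array, size_out=dataset.shape[1])

Nengo also ships nengo.processes.PresentInput, which implements this sample-cycling behaviour directly.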

Thank you for your reply.
Now, I am using an SNN to classify audio. I tried to import the dataset, but the model never finishes building.
After preprocessing, my SHD training set is an ndarray of shape (8156, 25, 700): the number of training samples, the temporal dimension of the audio, and the number of channels in the cochlear model. During training the input dimension reaches 25 × 700 = 17,500, which is more than my RAM can handle. I have tried to solve the problem, but haven’t found an effective way to do it.
I think the failure is caused by computing in too many dimensions, but it could be something else.
Can anyone help me with an answer?

I don’t have much experience with audio processing so my advice here will unfortunately be limited.
As I understand it, there are several preprocessing steps you can apply to reduce the dimensionality of the input signals. Examples include computing histograms, spectrograms, or MFCCs (mel-frequency cepstral coefficients), and then doing the classification on the processed features.
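As a rough illustration of the MFCC approach, here is a sketch assuming the librosa library (the file name and parameter values are placeholders):

import librosa

# Load a raw waveform (placeholder file name)
y, sr = librosa.load("sample.wav", sr=None)

# Reduce the signal to 13 MFCCs per frame; mfcc has shape (13, n_frames)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Collapse the time axis (e.g., by averaging) to get one compact
# feature vector per clip for classification
features = mfcc.mean(axis=1)  # shape: (13,)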
I recommend checking out @tbekolay’s PhD thesis. He discusses these approaches in it.