Need to download excitatory-only weights in nengo-bio

Hi, I am trying to download the excitatory-only weights for one of the examples in the nengo-bio GitHub repo.

Original code

f1, f2 = lambda t: np.sin(t), lambda t: np.cos(t)
with nengo.Network(seed=SEED) as model:
    inp_a = nengo.Node(f1)
    inp_b = nengo.Node(f2)

    ens_a = bio.Ensemble(n_neurons=101, dimensions=1, p_exc=0.8)
    ens_b = bio.Ensemble(n_neurons=102, dimensions=1, p_exc=0.8)
    ens_c = bio.Ensemble(n_neurons=103, dimensions=2)

    nengo.Connection(inp_a, ens_a)
    nengo.Connection(inp_b, ens_b)

    bio.Connection((ens_a, ens_b), ens_c)

    probe = nengo.Probe(ens_c, synapse=PROBE_SYNAPSE)

run_and_plot(model, probe, (f1, f2));


Modified code

f1 = lambda t: np.sin(t)
with nengo.Network(seed=SEED) as model:
    inp_a = nengo.Node(f1)

    ens_a = bio.Ensemble(n_neurons=100, dimensions=1, p_exc=0.8)
    ens_b = bio.Ensemble(n_neurons=100, dimensions=1)

    nengo.Connection(inp_a, ens_a)

    conn = bio.Connection(ens_a, ens_b)

    probe = nengo.Probe(ens_b, synapse=PROBE_SYNAPSE)

run_and_plot(model, probe, [f1]);

I used this to print the weights:

with nengo.Simulator(model) as sim:
    weights = sim.data[conn].weights
    print(weights)

OUTPUT

 
{excitatory: array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]]), inhibitory: array([[-0.01486258, -0.02687033, -0.00252252, ..., -0.00687898,
        -0.02729088, -0.05221793],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.00805202, -0.01390778, -0.00194837, ..., -0.00509022,
        -0.01523685, -0.02775859],
       ...,
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.01025195, -0.0180663 , -0.00217762, ..., -0.00552601,
        -0.01916261, -0.03561765],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ]])}

Can someone help me understand what is going wrong and why the excitatory weights are printing only zeros?

Weird, your code looks fine. I’m getting the following result:

{excitatory: array([[6.98063247e-11, 1.47438180e-10, 6.10179274e-09, ...,
        8.99905493e-12, 8.51779442e-11, 4.46504081e-09],
       [8.26611904e-03, 3.47253895e-03, 0.00000000e+00, ...,
        7.62583035e-04, 1.36468285e-10, 0.00000000e+00],
       [4.90397576e-09, 2.76148611e-09, 7.14016286e-10, ...,
        2.73287314e-11, 5.44249547e-09, 3.55275516e-08],
       ...,
       [1.64995079e-09, 1.06325106e-09, 5.86287891e-02, ...,
        4.17286284e-03, 1.88940638e-09, 1.80326775e-08],
       [0.00000000e+00, 1.62606218e-03, 1.78731599e-02, ...,
        1.58772615e-02, 0.00000000e+00, 1.49406512e-10],
       [0.00000000e+00, 0.00000000e+00, 0.00000000e+00, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00]]), inhibitory: array([[-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       ...,
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.        , -0.        , -0.        , ..., -0.        ,
        -0.        , -0.        ],
       [-0.05245011, -0.01489964, -0.07974507, ..., -0.00375597,
        -0.0680269 , -0.05365369]])}

Maybe something is wrong with your installation? Are you using the newest version of nengo-bio? (I originally released nengo-bio on PyPI, but never got around to publishing a new version there; make sure to install the newest version from GitHub directly.) Also make sure to install libnlif; the non-libnlif code path is not tested.

Sorry for all of this being a little wonky; nengo-bio is just a private project of mine that fell out of my PhD research, and neither ABR nor I am actively maintaining it at the moment.

Thank you so much.
I reinstalled everything and it seems like everything is fine now; below are the simulation results of the same code I mentioned earlier.
But the weights I got are different from yours. I think that is because of the seed value; these are the parameters I used:

PROBE_SYNAPSE = 0.1 # Filter to be used for the network output
T = 10.0 # Total simulation time
T_SKIP = 1.0 # Time to exclude from the RMSE
SEED = 4891 # Network seed
SS = 100 # Plot subsample

Results

Build finished in 0:00:01.
{excitatory: array([[0.00000000e+00, 1.50066926e-10, 1.96727218e-11, ...,
        0.00000000e+00, 1.24876984e-10, 0.00000000e+00],
       [0.00000000e+00, 0.00000000e+00, 3.27612261e-10, ...,
        0.00000000e+00, 2.28716326e-10, 0.00000000e+00],
       [0.00000000e+00, 0.00000000e+00, 3.14549615e-10, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       ...,
       [0.00000000e+00, 0.00000000e+00, 3.64753745e-10, ...,
        0.00000000e+00, 0.00000000e+00, 0.00000000e+00],
       [0.00000000e+00, 7.80243936e-10, 6.94371222e-12, ...,
        1.27971696e-11, 5.48846873e-11, 0.00000000e+00],
       [1.20150793e-09, 1.18120130e-08, 1.34191210e-11, ...,
        1.60849070e-10, 0.00000000e+00, 0.00000000e+00]]), inhibitory: array([[-0., -0., -0., ..., -0., -0., -0.],
       [-0., -0., -0., ..., -0., -0., -0.],
       [-0., -0., -0., ..., -0., -0., -0.],
       ...,
       [-0., -0., -0., ..., -0., -0., -0.],
       [-0., -0., -0., ..., -0., -0., -0.],
       [-0., -0., -0., ..., -0., -0., -0.]])}

Is this right? Please correct me if not.

Hm, the weights you are getting do not look quite right from what I can see (though this depends on the exact function you are trying to compute). Typically, there should be some weights in the 10^-4 to 10^-2 range.
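
As a quick sanity check (a sketch; it assumes you have pulled the excitatory block of sim.data[conn].weights into a NumPy array W_exc, and the exact way to access that block depends on your nengo-bio version), you can look at the magnitudes of the non-zero entries:

import numpy as np

# Assumption: W_exc is the excitatory weight matrix extracted from
# sim.data[conn].weights (see the printouts above)
nz = W_exc[W_exc > 0]
if nz.size:
    print("non-zero weights: min %.2e, median %.2e, max %.2e"
          % (nz.min(), np.median(nz), nz.max()))
else:
    print("all excitatory weights are zero -- the solver failed")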

Hi @astoecke, I reinstalled everything but I am still facing the same issue. Let me explain what I am actually doing.

I am doing a research project where we are trying to implement a spiking neural network in CMOS technology. We are using nengo-bio to generate 3-bit data for our circuit. I was using one of your examples to do that and was able to print the excitatory-only weights.

For the next step, I have to normalize the weights, run the model with the normalized weights, and get 3-bit data for our circuit.
I have been through a few discussions on the forum and could not find any relevant method for normalization.

Could you please let me know if there is any method to do this in nengo-bio?

Hi Akhil,

thank you for elaborating on your use-case!

Technically downloading the weights from nengo-bio should work just as you envisioned; it is just hard for me to debug the problem without being able to reproduce it on my end. We can dig into this in more detail, but I’m not sure that this is the right way forward for you (see below).

Specifically, I don’t think that the weights computed by NengoBio will work well with 3-bit quantisation; the dynamic range of the solution is simply too large. You will most likely need a specialised solver that takes quantisation into account.
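
To illustrate the problem, here is a hypothetical sketch (the function and the example numbers are mine, not part of nengo-bio) of what naive uniform 3-bit quantisation does to weights whose dynamic range spans many orders of magnitude:

import numpy as np

def quantise_3bit(W):
    # Naive uniform 3-bit quantisation. With eight levels spanning
    # [0, W.max()], any weight below roughly W.max() / 14 rounds to zero.
    n_levels = 2 ** 3 - 1  # seven non-zero levels plus zero
    step = W.max() / n_levels
    return np.round(W / step) * step

# Typical solver output mixes entries around 1e-2 with entries around 1e-10
W = np.array([5e-2, 1e-3, 3e-10, 8e-11])
print(quantise_3bit(W))  # -> [0.05 0.   0.   0.  ]

Almost every weight collapses to zero, which is why the solver itself needs to know about the quantisation grid.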

For that, I would recommend you simply solve for the connection weights yourself.
Specifically, Nengo Bio solves eqs. 3.11 and 3.12 in my thesis (see Harnessing Neural Dynamics as a Computational Resource, p. 74)

The steps to do that are the following:

  • Sample the pre-population activities A_i over the range of values x you want to compute functions over
  • Compute the currents J_i(f(x)) that we need to inject into the post-population to compute f(x)
  • Mark specific pre-population neurons as excitatory/inhibitory
  • Sort the pre-population activities into two matrices A^+ and A^- (i.e., an excitatory and an inhibitory pre-population activity matrix)
  • Solve a non-negative least-squares problem to obtain the weights that optimally recombine the pre-population activities into the desired target current (see the formulation right after this list)
  • Manually load the weights into Nengo; set the gains and biases of the target population to one and zero, respectively, because we are decoding the gains and biases from the pre-population (this is the most difficult step; historically, the gains and biases are deeply ingrained into how Nengo works)
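
For reference, the optimisation in the last two steps is roughly the following: with $A = (A^+ \;\; {-A^-})$ and the vector $J$ of target currents for one post-neuron,

$$\min_{w \ge 0} \; \lVert A w - J \rVert_2^2 + N \sigma^2 \lVert w \rVert_2^2,$$

which the code below solves by handing the regularised normal equations $(A^\top A + N \sigma^2 I)\, w = A^\top J$ to scipy.optimize.nnls.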

NengoBio normally does all of that for you; but I fear that you will have to heavily adapt the optimization part (i.e., take quantization into account) to get things working properly in your final application.

Here is some simple code that does all of the above steps in vanilla Nengo, and that should be easy to adapt to your quantised application:

import nengo
import numpy as np
import scipy.optimize

import matplotlib.pyplot as plt

N_pre = 101
N_post = 102
N_smpls = 10001
f = lambda x: x * x # Compute the square
#f = lambda x: x # Compute a communication channel
xs = np.linspace(-1, 1, N_smpls).reshape(-1, 1)

def remove_bias_current(sim, ens):
    # Some fun fiddling around with Nengo-internal data-structures
    signals = sim.signals
    for signal in signals:
        if signal.name == str(ens) + ".bias":
            signal._initial_value.setflags(write=True)
            signal._initial_value[...] = 0.0


def compute_J(sim, ens, xs):
    data = sim.data[ens]
    return data.gain * (xs @ data.encoders.T) + data.bias


def compute_A(sim, ens, Js): # "sim" is not needed, but makes the code look more symmetric
    return ens.neuron_type.rates(Js, gain=np.ones(ens.n_neurons), bias=np.zeros(ens.n_neurons))


with nengo.Network() as model:
    ens_pre = nengo.Ensemble(N_pre, 1, max_rates=nengo.dists.Uniform(50, 100), label="pre")
    ens_post = nengo.Ensemble(N_post, 1, max_rates=nengo.dists.Uniform(50, 100), label="post")

    nd_in = nengo.Node(lambda t: t / 10 - 1)
    nengo.Connection(nd_in, ens_pre)
    con_direct = nengo.Connection(ens_pre.neurons, ens_post.neurons, transform=np.zeros((N_post, N_pre)))

    probe_pre = nengo.Probe(ens_pre, synapse=0.1)
    probe_post = nengo.Probe(ens_post, synapse=0.1)


with nengo.Simulator(model, optimize=False) as sim: # Need to disable optimization to be able to access the biases
    # Compute the currents that we expect to be injected into the pre-population when we
    # represent x
    J_pre = compute_J(sim, ens_pre, xs)
    A_pre = compute_A(sim, ens_pre, J_pre)

    # Compute the current we need to inject into the post population when we compute
    # f(x)
    J_post = compute_J(sim, ens_post, f(xs))
    A_post = compute_A(sim, ens_post, J_post) # Don't really need this, but useful for debugging

    # Mark some pre-neurons as excitatory, and others as inhibitory
    is_excitatory = np.random.choice([False, True], p=[0.3, 0.7], size=(N_pre,))
    is_inhibitory = ~is_excitatory

    # Use the lines below to deactivate Dale's principle
    #is_excitatory = np.ones(N_pre, dtype=bool)
    #is_inhibitory = np.ones(N_pre, dtype=bool)

    # Split the pre-activities into an excitatory and an inhibitory matrix
    A_pre_exc = A_pre[:, is_excitatory]
    A_pre_inh = A_pre[:, is_inhibitory]

    # Solve the NNLS problem (see p. 74 of http://hdl.handle.net/10012/17850)
    # for each post-neuron individually
    A = np.concatenate((A_pre_exc, -A_pre_inh), axis=1)    
    N_pre_total = A.shape[1] # Different from N_pre if neurons are marked both as excitatory and inhibitory
    sigma = 1.0 # Regularisation factor
    I_reg = N_smpls * sigma * sigma * np.eye(N_pre_total)
    W = np.zeros((N_post, N_pre_total)) # transposed compared to the thesis
    for i in range(N_post):
        W[i] = scipy.optimize.nnls(A.T @ A + I_reg, A.T @ J_post[:, i])[0]

    # Reconstruct a weight matrix of shape N_post, N_pre; the weight matrix
    # computed above has all excitatory and inhibitory pre-neurons in
    # separate blocks; we need to sort them back to the order they are
    # in the nengo pre-population
    W_reorg = np.zeros((N_post, N_pre))
    i_exc, i_inh = 0, np.sum(1 * is_excitatory)
    for i in range(N_pre):
        if is_excitatory[i]:
            W_reorg[:, i] += W[:, i_exc]
            i_exc += 1
        if is_inhibitory[i]:
            W_reorg[:, i] -= W[:, i_inh]
            i_inh += 1

    # Forcefully shove these weights into the Nengo simulator; reset the
    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim.data[con_direct].weights.setflags(write=True)
    sim.data[con_direct].weights[...] = W_reorg / sim.data[ens_post].gain[:, None]
    remove_bias_current(sim, ens_post)

    # Run the simulation!
    sim.run(20.0)

    # Plot the results
    ts = sim.trange()
    xs_pre = sim.data[probe_pre]
    xs_post = sim.data[probe_post]
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].plot(ts, xs_pre)
    axs[0].plot(ts, ts / 10.0 - 1.0, 'k--')
    axs[0].set_xlabel("Time $t$")
    axs[0].set_ylabel("Decoded value $x$")
    axs[0].set_title("Pre-population")

    axs[1].plot(ts, xs_post)
    axs[1].plot(ts, f(ts / 10.0 - 1.0), 'k--')
    axs[1].set_xlabel("Time $t$")
    axs[1].set_ylabel("Decoded value $x$")
    axs[1].set_title("Post-population")

    # Extract and plot the excitatory and inhibitory weights
    W_exc = W_reorg[:, is_excitatory]
    W_inh = W_reorg[:, is_inhibitory]

    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].imshow(W_exc, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[0].set_xlabel("Exc. pre-neuron index")
    axs[0].set_ylabel("Post-neuron index")
    axs[0].set_title("Excitatory weights")

    axs[1].imshow(W_inh, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[1].set_xlabel("Inh. pre-neuron index")
    axs[1].set_ylabel("Post-neuron index")
    axs[1].set_title("Inhibitory weights")

Edit: Fixed the above code; resetting the gains and biases didn’t work as expected. It should be good now.

Thank you so much for the explanation and the code, @astoecke.
Actually, I am trying to find out the gain that you mention here:

    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim.data[con_direct].weights.setflags(write=True)
    sim.data[con_direct].weights[...] = W_reorg / sim.data[ens_post].gain[:, None]
    remove_bias_current(sim, ens_post)

For that, we need the neuron properties, so:

  1. Using PES supervised learning, I tried to save the weights, double them in model 1, and run those doubled weights in the second model.
  2. To observe the response of the neurons, I am using tuning curves.

I have been trying to add PES learning to the code you posted above, and I have a few doubts:

  1. I am not getting how the error population and its connections work.
  2. I am confused about the error node. Can you explain more about the error population?
  3. In model 2, I am getting an error when trying to connect the pre-neurons to the post-ensemble.
  4. I am also confused about what function to pass in the connection below:
nengo.Connection(ens_pre, err, transform=-1, function=lambda x: x)

Here is the code which I am trying to implement:

import nengo
import numpy as np
import scipy.optimize

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from nengo.dists import Choice
from nengo.utils.ensemble import response_curves, tuning_curves

N_pre = 101
N_post = 102
N_smpls = 10001
simtime = 20 
f = lambda x: x * x # Compute the square
#f = lambda x: x # Compute a communication channel
xs = np.linspace(-1, 1, N_smpls).reshape(-1, 1)

def remove_bias_current(sim, ens):
    # Some fun fiddling around with Nengo-internal data-structures
    signals = sim.signals
    for signal in signals:
        if signal.name == str(ens) + ".bias":
            signal._initial_value.setflags(write=True)
            signal._initial_value[...] = 0.0


def compute_J(sim, ens, xs):
    data = sim.data[ens]
    return data.gain * (xs @ data.encoders.T) + data.bias


def compute_A(sim, ens, Js): # "sim" is not needed, but makes the code look more symmetric
    return ens.neuron_type.rates(Js, gain=np.ones(ens.n_neurons), bias=np.zeros(ens.n_neurons))


with nengo.Network() as model:
    ens_pre = nengo.Ensemble(N_pre, 1, radius=2, max_rates=nengo.dists.Uniform(50, 100), label="pre")
    ens_post = nengo.Ensemble(N_post, 1, radius=2, max_rates=nengo.dists.Uniform(50, 100), label="post")

    nd_in = nengo.Node(lambda t: t / 10 - 1)
    nengo.Connection(nd_in, ens_pre)
    
    # Set up learning rule
    con_direct = nengo.Connection(ens_pre.neurons, ens_post.neurons, transform=np.zeros((N_post, N_pre)))
    con_direct.learning_rule_type = nengo.PES()
    
    # Create error population and connections
    err = nengo.Node(size_in=1)
    nengo.Connection(ens_post, err)
    nengo.Connection(ens_pre, err, transform=-1, function=lambda x: x)
    nengo.Connection(err, con_direct.learning_rule)

    # Add probes
    probe_in = nengo.Probe(nd_in)
    probe_pre = nengo.Probe(ens_pre, synapse=0.1)
    probe_post = nengo.Probe(ens_post, synapse=0.1)
    probe_err = nengo.Probe(err, synapse=0.1)

    # Add probe for connection weights (decoders)
    probe_weights = nengo.Probe(con_direct, "weights", sample_every=simtime)

with nengo.Simulator(model, optimize=False) as sim: 
    
    eval_points, activities = tuning_curves(ens_pre, sim)
    eval_points_post, activities_post = tuning_curves(ens_post, sim)
    
    # Need to disable optimization to be able to access the biases
    # Compute the currents that we expect to be injected into the pre-population when we
    # represent x
    J_pre = compute_J(sim, ens_pre, xs)
    A_pre = compute_A(sim, ens_pre, J_pre)

    # Compute the current we need to inject into the post population when we compute
    # f(x)
    J_post = compute_J(sim, ens_post, f(xs))
    A_post = compute_A(sim, ens_post, J_post) # Don't really need this, but useful for debugging

    # Mark some pre-neurons as excitatory, and others as inhibitory
    is_excitatory = np.random.choice([False, True], p=[0.3, 0.7], size=(N_pre,))
    is_inhibitory = ~is_excitatory

    # Use the lines below to deactivate Dale's principle
    #is_excitatory = np.ones(N_pre, dtype=bool)
    #is_inhibitory = np.ones(N_pre, dtype=bool)

    # Split the pre-activities into an excitatory and an inhibitory matrix
    A_pre_exc = A_pre[:, is_excitatory]
    A_pre_inh = A_pre[:, is_inhibitory]

    # Solve the NNLS problem (see p. 74 of http://hdl.handle.net/10012/17850)
    # for each post-neuron individually
    A = np.concatenate((A_pre_exc, -A_pre_inh), axis=1)    
    N_pre_total = A.shape[1] # Different from N_pre if neurons are marked both as excitatory and inhibitory
    sigma = 1.0 # Regularisation factor
    I_reg = N_smpls * sigma * sigma * np.eye(N_pre_total)
    W = np.zeros((N_post, N_pre_total)) # transposed compared to the thesis
    for i in range(N_post):
        W[i] = scipy.optimize.nnls(A.T @ A + I_reg, A.T @ J_post[:, i])[0]

    # Reconstruct a weight matrix of shape N_post, N_pre; the weight matrix
    # computed above has all excitatory and inhibitory pre-neurons in
    # separate blocks; we need to sort them back to the order they are
    # in the nengo pre-population
    W_reorg = np.zeros((N_post, N_pre))
    i_exc, i_inh = 0, np.sum(1 * is_excitatory)
    for i in range(N_pre):
        if is_excitatory[i]:
            W_reorg[:, i] += W[:, i_exc]
            i_exc += 1
        if is_inhibitory[i]:
            W_reorg[:, i] -= W[:, i_inh]
            i_inh += 1

    # Forcefully shove these weights into the Nengo simulator; reset the
    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim.data[con_direct].weights.setflags(write=True)
    sim.data[con_direct].weights[...] = W_reorg / sim.data[ens_post].gain[:, None]
    remove_bias_current(sim, ens_post)


    # Run the simulation!
    sim.run(simtime)

    # Save the learned decoders 
    weights = sim.data[probe_weights][-1] 
    print(weights)
    weights = weights * 2
    print("doubled weights")
    print(weights)

    # Plot the results
    ts = sim.trange()
    xs_pre = sim.data[probe_pre]
    xs_post = sim.data[probe_post]
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].plot(ts, xs_pre)
    axs[0].plot(ts, ts / 10.0 - 1.0, 'k--')
    axs[0].set_xlabel("Time $t$")
    axs[0].set_ylabel("Decoded value $x$")
    axs[0].set_title("Pre-population")

    axs[1].plot(ts, xs_post)
    axs[1].plot(ts, f(ts / 10.0 - 1.0), 'k--')
    axs[1].set_xlabel("Time $t$")
    axs[1].set_ylabel("Decoded value $x$")
    axs[1].set_title("Post-population")

    # Extract and plot the excitatory and inhibitory weights
    W_exc = W_reorg[:, is_excitatory]
    W_inh = W_reorg[:, is_inhibitory]
    
    print(W_exc)
    print(W_inh)
    
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].imshow(W_exc, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[0].set_xlabel("Exc. pre-neuron index")
    axs[0].set_ylabel("Post-neuron index")
    axs[0].set_title("Excitatory weights")

    axs[1].imshow(W_inh, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[1].set_xlabel("Inh. pre-neuron index")
    axs[1].set_ylabel("Post-neuron index")
    axs[1].set_title("Inhibitory weights")

    plt.figure()
    plt.plot(eval_points, activities)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

    plt.figure()
    plt.plot(eval_points_post, activities_post)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

# Model with loaded weights and no learning
with nengo.Network() as model2:
    ens_pre = nengo.Ensemble(N_pre, 1, max_rates=nengo.dists.Uniform(50, 100), label="pre")
    ens_post = nengo.Ensemble(N_post, 1, max_rates=nengo.dists.Uniform(50, 100), label="post")

    nd_in = nengo.Node(lambda t: t / 10 - 1)
    nengo.Connection(nd_in, ens_pre)
    con_direct = nengo.Connection(ens_pre.neurons, ens_post.neurons, transform=weights)

    probe_in = nengo.Probe(nd_in)
    probe_pre = nengo.Probe(ens_pre, synapse=0.1)
    probe_post = nengo.Probe(ens_post, synapse=0.1)


with nengo.Simulator(model2, optimize=False) as sim2: 
    
    eval_points, activities = tuning_curves(ens_pre, sim2)
    eval_points_post, activities_post = tuning_curves(ens_post, sim2)
    # Need to disable optimization to be able to access the biases
    # Compute the currents that we expect to be injected into the pre-population when we
    # represent x
    J_pre = compute_J(sim2, ens_pre, xs)
    A_pre = compute_A(sim2, ens_pre, J_pre)

    # Compute the current we need to inject into the post population when we compute
    # f(x)
    J_post = compute_J(sim2, ens_post, f(xs))
    A_post = compute_A(sim2, ens_post, J_post) # Don't really need this, but useful for debugging

    # Mark some pre-neurons as excitatory, and others as inhibitory
    is_excitatory = np.random.choice([False, True], p=[0.3, 0.7], size=(N_pre,))
    is_inhibitory = ~is_excitatory

    # Use the lines below to deactivate Dale's principle
    #is_excitatory = np.ones(N_pre, dtype=bool)
    #is_inhibitory = np.ones(N_pre, dtype=bool)

    # Split the pre-activities into an excitatory and an inhibitory matrix
    A_pre_exc = A_pre[:, is_excitatory]
    A_pre_inh = A_pre[:, is_inhibitory]

    # Solve the NNLS problem (see p. 74 of http://hdl.handle.net/10012/17850)
    # for each post-neuron individually
    A = np.concatenate((A_pre_exc, -A_pre_inh), axis=1)    
    N_pre_total = A.shape[1] # Different from N_pre if neurons are marked both as excitatory and inhibitory
    sigma = 1.0 # Regularisation factor
    I_reg = N_smpls * sigma * sigma * np.eye(N_pre_total)
    W = np.zeros((N_post, N_pre_total)) # transposed compared to the thesis
    for i in range(N_post):
        W[i] = scipy.optimize.nnls(A.T @ A + I_reg, A.T @ J_post[:, i])[0]

    # Reconstruct a weight matrix of shape N_post, N_pre; the weight matrix
    # computed above has all excitatory and inhibitory pre-neurons in
    # separate blocks; we need to sort them back to the order they are
    # in the nengo pre-population
    W_reorg = np.zeros((N_post, N_pre))
    i_exc, i_inh = 0, np.sum(1 * is_excitatory)
    for i in range(N_pre):
        if is_excitatory[i]:
            W_reorg[:, i] += W[:, i_exc]
            i_exc += 1
        if is_inhibitory[i]:
            W_reorg[:, i] -= W[:, i_inh]
            i_inh += 1

    # Forcefully shove these weights into the Nengo simulator; reset the
    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim2.data[con_direct].weights.setflags(write=True)
    sim2.data[con_direct].weights[...] = W_reorg / sim2.data[ens_post].gain[:, None]
    remove_bias_current(sim2, ens_post)

    # Run the simulation!
    sim2.run(simtime)

    # Plot the results
    ts = sim2.trange()
    xs_pre = sim2.data[probe_pre]
    xs_post = sim2.data[probe_post]
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].plot(ts, xs_pre)
    axs[0].plot(ts, ts / 10.0 - 1.0, 'k--')
    axs[0].set_xlabel("Time $t$")
    axs[0].set_ylabel("Decoded value $x$")
    axs[0].set_title("Pre-population")

    axs[1].plot(ts, xs_post)
    axs[1].plot(ts, f(ts / 10.0 - 1.0), 'k--')
    axs[1].set_xlabel("Time $t$")
    axs[1].set_ylabel("Decoded value $x$")
    axs[1].set_title("Post-population")

    # Extract and plot the excitatory and inhibitory weights
    W_exc = W_reorg[:, is_excitatory]
    W_inh = W_reorg[:, is_inhibitory]

    print(W_exc)    
    print(W_inh) 

    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].imshow(W_exc, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[0].set_xlabel("Exc. pre-neuron index")
    axs[0].set_ylabel("Post-neuron index")
    axs[0].set_title("Excitatory weights")

    axs[1].imshow(W_inh, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[1].set_xlabel("Inh. pre-neuron index")
    axs[1].set_ylabel("Post-neuron index")
    axs[1].set_title("Inhibitory weights")
    
    plt.figure()
    plt.plot(eval_points, activities)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

    plt.figure()
    plt.plot(eval_points_post, activities_post)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

I am just using your code and changing things. Please correct me if I am doing anything wrong.

Thanks

Hi Akhil,

good to hear that you found my code useful! Unfortunately, I’m not entirely sure what you are trying to do; maybe you can elaborate a little.

A few thoughts from my side:

  • The gains and biases are unfortunately baked into Nengo’s core code. The code I’ve posted above jumps through a lot of hoops to get rid of them in the final network; I’m only using the gain and bias to define the tuning curves of the neurons when solving for weights. I’m not sure what you mean by “I’m trying to find out the gain”.
  • The PES learning rule will not help you to find the gain.
  • The PES learning rule is used for online learning, i.e., building models that learn synaptic weights as they are being executed. To improve over time, the network needs an internal error signal (see the minimal sketch after this list). The code I wrote optimally solves for the weights ahead-of-time, so we don’t need to do any online learning.
  • How to do online learning with the excitatory/inhibitory weight split is still an open research question, so if doing that is your goal, I don’t have any good advice.
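
For completeness, here is a minimal sketch of how the error signal is usually wired up for PES in vanilla Nengo (this is standard Nengo usage, and it does not respect the excitatory/inhibitory split): the error node computes actual minus target, so the function on the pre-to-error connection is the target function; lambda x: x only makes sense for a communication channel.

import numpy as np
import nengo

f = lambda x: x * x  # the target function the post-population should compute

with nengo.Network() as net:
    stim = nengo.Node(lambda t: np.sin(t))
    pre = nengo.Ensemble(50, dimensions=1)
    post = nengo.Ensemble(50, dimensions=1)
    nengo.Connection(stim, pre)

    # Learned connection; starts out computing zero
    conn = nengo.Connection(pre, post, function=lambda x: 0.0,
                            learning_rule_type=nengo.PES())

    # Error signal: error = actual - target = post - f(pre)
    err = nengo.Node(size_in=1)
    nengo.Connection(post, err)
    nengo.Connection(pre, err, function=f, transform=-1)
    nengo.Connection(err, conn.learning_rule)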

I hope that this helps; I think that there must be a much simpler solution for what you are ultimately trying to do 🙂

Thank you so much for the explanation, @astoecke. It helped me understand my mistake.
Actually, there is no need for PES learning, as the code already solves for the weights; I just wanted to save the weights, scale them, and reuse them in the next run.
The following code depicts what I am trying to do.

import nengo
import numpy as np
import scipy.optimize

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from nengo.dists import Choice
from nengo.utils.ensemble import response_curves, tuning_curves

N_pre = 101
N_post = 102
N_smpls = 10001
SEED = 4891
simtime = 20 
f = lambda x: x * x # Compute the square
#f = lambda x: x # Compute a communication channel
xs = np.linspace(-1, 1, N_smpls).reshape(-1, 1)

def remove_bias_current(sim, ens):
    # Some fun fiddling around with Nengo-internal data-structures
    signals = sim.signals
    for signal in signals:
        if signal.name == str(ens) + ".bias":
            signal._initial_value.setflags(write=True)
            signal._initial_value[...] = 0.0


def compute_J(sim, ens, xs):
    data = sim.data[ens]
    return data.gain * (xs @ data.encoders.T) + data.bias


def compute_A(sim, ens, Js): # "sim" is not needed, but makes the code look more symmetric
    return ens.neuron_type.rates(Js, gain=np.ones(ens.n_neurons), bias=np.zeros(ens.n_neurons))


with nengo.Network(seed=SEED) as model:
    ens_pre = nengo.Ensemble(N_pre, 1, max_rates=nengo.dists.Uniform(50, 100), label="pre")
    ens_post = nengo.Ensemble(N_post, 1, max_rates=nengo.dists.Uniform(50, 100), label="post")

    nd_in = nengo.Node(lambda t: t / 10 - 1)
    nengo.Connection(nd_in, ens_pre)
    con_direct = nengo.Connection(ens_pre.neurons, ens_post.neurons, transform=np.zeros((N_post, N_pre)))

    probe_pre = nengo.Probe(ens_pre, synapse=0.1)
    probe_post = nengo.Probe(ens_post, synapse=0.1)


with nengo.Simulator(model, optimize=False) as sim: # Need to disable optimization to be able to access the biases
    # Compute the currents that we expect to be injected into the pre-population when we
    # represent x

    eval_points, activities = tuning_curves(ens_pre, sim)
    eval_points_post, activities_post = tuning_curves(ens_post, sim)

    J_pre = compute_J(sim, ens_pre, xs)
    A_pre = compute_A(sim, ens_pre, J_pre)

    # Compute the current we need to inject into the post population when we compute
    # f(x)
    J_post = compute_J(sim, ens_post, f(xs))
    A_post = compute_A(sim, ens_post, J_post) # Don't really need this, but useful for debugging

    # Mark some pre-neurons as excitatory, and others as inhibitory
    is_excitatory = np.random.choice([False, True], p=[0.3, 0.7], size=(N_pre,))
    is_inhibitory = ~is_excitatory

    # Use the lines below to deactivate Dale's principle
    #is_excitatory = np.ones(N_pre, dtype=bool)
    #is_inhibitory = np.ones(N_pre, dtype=bool)

    # Split the pre-activities into an excitatory and an inhibitory matrix
    A_pre_exc = A_pre[:, is_excitatory]
    A_pre_inh = A_pre[:, is_inhibitory]

    # Solve the NNLS problem (see p. 74 of http://hdl.handle.net/10012/17850)
    # for each post-neuron individually
    A = np.concatenate((A_pre_exc, -A_pre_inh), axis=1)    
    N_pre_total = A.shape[1] # Different from N_pre if neurons are marked both as excitatory and inhibitory
    sigma = 1.0 # Regularisation factor
    I_reg = N_smpls * sigma * sigma * np.eye(N_pre_total)
    W = np.zeros((N_post, N_pre_total)) # transposed compared to the thesis
    for i in range(N_post):
        W[i] = scipy.optimize.nnls(A.T @ A + I_reg, A.T @ J_post[:, i])[0]

    # Reconstruct a weight matrix of shape N_post, N_pre; the weight matrix
    # computed above has all excitatory and inhibitory pre-neurons in
    # separate blocks; we need to sort them back to the order they are
    # in the nengo pre-population
    W_reorg = np.zeros((N_post, N_pre))
    i_exc, i_inh = 0, np.sum(1 * is_excitatory)
    for i in range(N_pre):
        if is_excitatory[i]:
            W_reorg[:, i] += W[:, i_exc]
            i_exc += 1
        if is_inhibitory[i]:
            W_reorg[:, i] -= W[:, i_inh]
            i_inh += 1

    # Forcefully shove these weights into the Nengo simulator; reset the
    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim.data[con_direct].weights.setflags(write=True)
    sim.data[con_direct].weights[...] = W_reorg / sim.data[ens_post].gain[:, None]
    remove_bias_current(sim, ens_post)

    # Run the simulation!
    sim.run(20.0)
    weights_1 = W_reorg
    print("saved weights")
    print(weights_1)
    

    # Plot the results     
    ts = sim.trange()
    xs_pre = sim.data[probe_pre]
    xs_post = sim.data[probe_post]
   
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].plot(ts, xs_pre)
    axs[0].plot(ts, ts / 10.0 - 1.0, 'k--')
    axs[0].set_xlabel("Time $t$")
    axs[0].set_ylabel("Decoded value $x$")
    axs[0].set_title("Pre-population")

    axs[1].plot(ts, xs_post)
    axs[1].plot(ts, f(ts / 10.0 - 1.0), 'k--')
    axs[1].set_xlabel("Time $t$")
    axs[1].set_ylabel("Decoded value $x$")
    axs[1].set_title("Post-population")

    # Extract and plot the excitatory and inhibitory weights
    W_exc = weights_1[:, is_excitatory]
    W_inh = weights_1[:, is_inhibitory]
 

    scaled_weights = weights_1*2
    print("scaled weights")
    print(scaled_weights)
   
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].imshow(W_exc, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[0].set_xlabel("Exc. pre-neuron index")
    axs[0].set_ylabel("Post-neuron index")
    axs[0].set_title("Excitatory weights")

    axs[1].imshow(W_inh, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[1].set_xlabel("Inh. pre-neuron index")
    axs[1].set_ylabel("Post-neuron index")
    axs[1].set_title("Inhibitory weights")

    plt.figure()
    plt.plot(eval_points, activities)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

    plt.figure()
    plt.plot(eval_points_post, activities_post)
    # We could have alternatively shortened this to
    # plt.plot(*tuning_curves(ens_1d, sim))
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")
    

with nengo.Network(seed=SEED) as model2:
    ens_pre_2 = nengo.Ensemble(N_pre, 1, max_rates=nengo.dists.Uniform(50, 100), label="pre")
    ens_post_2 = nengo.Ensemble(N_post, 1, max_rates=nengo.dists.Uniform(50, 100), label="post")

    nd_in_2 = nengo.Node(lambda t: t / 10 - 1)
    nengo.Connection(nd_in_2, ens_pre_2)
    con_direct_2 = nengo.Connection(ens_pre_2.neurons, ens_post_2.neurons, transform=scaled_weights)
    
    probe_pre_2 = nengo.Probe(ens_pre_2, synapse=0.1)
    probe_post_2= nengo.Probe(ens_post_2, synapse=0.1)


with nengo.Simulator(model2, optimize=False) as sim2: # Need to disable optimization to be able to access the biases
    # Compute the currents that we expect to be injected into the pre-population when we
    # represent x
 
    eval_points_2, activities_2 = tuning_curves(ens_pre_2, sim2)
    eval_points_post_2, activities_post_2 = tuning_curves(ens_post_2, sim2)     

    J_pre_2 = compute_J(sim2, ens_pre_2, xs)
    A_pre_2 = compute_A(sim2, ens_pre_2, J_pre_2)

    # Compute the current we need to inject into the post population when we compute
    # f(x)
    J_post_2 = compute_J(sim2, ens_post_2, f(xs))
    A_post_2 = compute_A(sim2, ens_post_2, J_post_2) # Don't really need this, but useful for debugging

    # Mark some pre-neurons as excitatory, and others as inhibitory
    is_excitatory_2 = np.random.choice([False, True], p=[0.3, 0.7], size=(N_pre,))
    is_inhibitory_2 = ~is_excitatory_2

    # Use the lines below to deactivate Dale's principle
    #is_excitatory = np.ones(N_pre, dtype=bool)
    #is_inhibitory = np.ones(N_pre, dtype=bool)

    # Split the pre-activities into an excitatory and an inhibitory matrix
    A_pre_exc_2 = A_pre_2[:, is_excitatory_2]
    A_pre_inh_2 = A_pre_2[:, is_inhibitory_2]

    # Solve the NNLS problem (see p. 74 of http://hdl.handle.net/10012/17850)
    # for each post-neuron individually
    A_2 = np.concatenate((A_pre_exc_2, -A_pre_inh_2), axis=1)    
    N_pre_total_2 = A_2.shape[1] # Different from N_pre if neurons are marked both as excitatory and inhibitory
    sigma = 1.0 # Regularisation factor
    I_reg_2 = N_smpls * sigma * sigma * np.eye(N_pre_total_2)
    W_2 = np.zeros((N_post, N_pre_total_2)) # transposed compared to the thesis
    for i in range(N_post):
        W_2[i] = scipy.optimize.nnls(A_2.T @ A_2 + I_reg_2, A_2.T @ J_post_2[:, i])[0]

    # Reconstruct a weight matrix of shape N_post, N_pre; the weight matrix
    # computed above has all excitatory and inhibitory pre-neurons in
    # separate blocks; we need to sort them back to the order they are
    # in the nengo pre-population
    W_reorg_2 = np.zeros((N_post, N_pre))
    i_exc_2, i_inh_2 = 0, np.sum(1 * is_excitatory_2)
    for i in range(N_pre):
        if is_excitatory_2[i]:
            W_reorg_2[:, i] += W_2[:, i_exc_2]
            i_exc_2 += 1
        if is_inhibitory_2[i]:
            W_reorg_2[:, i] -= W_2[:, i_inh_2]
            i_inh_2 += 1

    # Forcefully shove these weights into the Nengo simulator; reset the
    # biases; divide by the gain (can't figure out where to reset the gain internally)
    sim2.data[con_direct_2].weights.setflags(write=True)
    sim2.data[con_direct_2].weights[...] = W_reorg_2 / sim2.data[ens_post_2].gain[:, None]
    remove_bias_current(sim2, ens_post_2)

    # Run the simulation!
    sim2.run(20.0)

    weights_2 = W_reorg_2
    print("Model-2 saved weights")
    print(weights_2)

    # Plot the results     
    ts = sim2.trange()
    xs_pre_2 = sim2.data[probe_pre_2]
    xs_post_2 = sim2.data[probe_post_2]

    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].plot(ts, xs_pre_2)
    axs[0].plot(ts, ts / 10.0 - 1.0, 'k--')
    axs[0].set_xlabel("Time $t$")
    axs[0].set_ylabel("Decoded value $x$")
    axs[0].set_title("Pre-population")

    axs[1].plot(ts, xs_post_2)
    axs[1].plot(ts, f(ts / 10.0 - 1.0), 'k--')
    axs[1].set_xlabel("Time $t$")
    axs[1].set_ylabel("Decoded value $x$")
    axs[1].set_title("Post-population")

    # Extract and plot the excitatory and inhibitory weights
    W_exc_2 = W_reorg_2[:, is_excitatory_2]
    W_inh_2 = W_reorg_2[:, is_inhibitory_2]

    print("Excitatory weights")
    print(W_exc_2)
    print("Inhibitory weights")
    print(W_inh_2)
    
    fig, axs = plt.subplots(1, 2, figsize=(10, 5))

    axs[0].imshow(W_exc_2, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[0].set_xlabel("Exc. pre-neuron index")
    axs[0].set_ylabel("Post-neuron index")
    axs[0].set_title("Excitatory weights")

    axs[1].imshow(W_inh_2, vmin=-0.01, vmax=0.01, cmap='RdBu')
    axs[1].set_xlabel("Inh. pre-neuron index")
    axs[1].set_ylabel("Post-neuron index")
    axs[1].set_title("Inhibitory weights")

    plt.figure()
    plt.plot(eval_points_2, activities_2)
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")

    plt.figure()
    plt.plot(eval_points_post_2, activities_post_2)
    plt.ylabel("Firing rate (Hz)")
    plt.xlabel("Input scalar, x")
    
    plt.figure()
    plt.plot(sim.trange(), sim.data[probe_post], label="post_model_1")
    plt.plot(sim2.trange(), sim2.data[probe_post_2], label="post_model_2")
    plt.legend()
    plt.show()
    
  1. When I save the scaled weights and reuse them in model 2, the post-population decoded estimate follows the input curve, which should not happen.

  2. I do not understand why we are doing this:

    sim2.data[con_direct_2].weights.setflags(write=True)
    sim2.data[con_direct_2].weights[...] = W_reorg_2 / sim2.data[ens_post_2].gain[:, None]
    remove_bias_current(sim2, ens_post_2)

and what are these weights (sim2.data[con_direct_2].weights[…]) for?

  3. Is it possible to save the tuning curves and reproduce them in the next run in such a way that I can change properties like the intercept and max rate of a single neuron? I have been through many discussions but couldn’t find any leads.