Trying to Understand Network Restrictions

Hi,
I’ve been trying to run a network using Nengo Loihi and have had a hard time understanding where the various restrictions come from, specifically IN_AXONS_MAX and MAX_SYNAPSE_BITS. So, a couple of questions based on the code below:

  1. I would think that given an input size and a number of neurons, the number of input axons would be n_reservoir_neurons + input_size. However, it seems that in Nengo Loihi the calculation is (2*input_size + n_reservoir_neurons), which causes a network with 1000 neurons and a 1600-dimensional input to crash with:
    nengo.exceptions.BuildError: Input axons (4200) exceeded max (4096).
    Where do the extra axons come from?

  2. The total number of synapses also doesn’t make sense to me. For the network below, it seems like the maximum number of synapses should be n_reservoir_neurons², which should be well below the ~1 million synapses available on a Loihi core. However, for a 200-input, 200-neuron network, the program crashes with:
    nengo.exceptions.BuildError: Total synapse bits (1881600) exceeded max (1048576)
    Why does this exceed the synapse limit?

Help appreciated! Right now it seems like I have to shrink each ensemble to minuscule sizes to make networks run.

import numpy as np
import nengo
import nengo_loihi

neuron_type = nengo.LIF()
gain = nengo.dists.Uniform(0.05, 0.05)
bias = nengo.dists.Uniform(0, 0)

n_reservoir_neurons = 1000
input_size = 1600
im1 = np.random.rand(input_size)
U = np.random.rand(n_reservoir_neurons, input_size)
J = np.random.rand(n_reservoir_neurons, n_reservoir_neurons)

with nengo.Network() as model:

    # link the input
    f_in = nengo.Node(im1, size_out=input_size)

    # create reservoir neurons
    A = nengo.Ensemble(
        n_neurons=n_reservoir_neurons,
        dimensions=1,
        neuron_type=neuron_type,
        gain=gain,
        bias=bias,
    )

    # feedforward input connections
    nengo.Connection(f_in, A.neurons, synapse=None, transform=U)

    # recurrent fast connections
    nengo.Connection(A.neurons, A.neurons, transform=J, synapse=0.005)

with nengo_loihi.Simulator(model, progress_bar=False, target='sim', dt=0.001) as sim:
    sim.run(1)

Hello,

A Loihi core has just over a million bits (2^20 = 1,048,576) to store all synapse information. In nengo_loihi, we always use 8-bit weights for each synapse, so the minimum number of bits required is 8 times the number of synapses. Sometimes more bits are required depending on other factors (for example, if indices have to be stored with the weights).
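To make that arithmetic concrete, here is a quick back-of-the-envelope check. Note this is only a lower bound, since index storage and other overhead can push real usage higher (which is likely why your 200 × 200 network reports more bits than this estimate):

MAX_SYNAPSE_BITS = 1048576  # ~1 Mbit of synapse memory per Loihi core

def min_synapse_bits(n_pre, n_post):
    # lower bound: 8 bits per weight for a dense (all-to-all) connection
    return 8 * n_pre * n_post

print(min_synapse_bits(300, 300))  # 720000  -> fits within one core
print(min_synapse_bits(400, 400))  # 1280000 -> exceeds the budget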

In practice, this means if you’re doing an all-to-all (dense) connection, you can’t go much larger than 300 * 300 weights. For your application, two potential resolutions are making the weights sparse, or trying to split your reservoir across more cores (using more than one ensemble to represent the reservoir).
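For example, here is a rough sketch of the splitting approach, dividing a dense recurrent matrix into blocks across two ensembles. The sizes and the two-way split are illustrative, and this assumes each ensemble ends up on its own core; larger networks may need more pieces:

import numpy as np
import nengo

n = 400
half = n // 2
J = np.random.rand(n, n)

with nengo.Network() as model:
    # represent the reservoir with two ensembles so the synapses can be
    # distributed across more than one core
    parts = [
        nengo.Ensemble(half, 1,
                       gain=nengo.dists.Uniform(0.05, 0.05),
                       bias=nengo.dists.Uniform(0, 0))
        for _ in range(2)
    ]
    for i, post in enumerate(parts):
        for j, pre in enumerate(parts):
            # each block of J becomes its own connection
            block = J[i * half:(i + 1) * half, j * half:(j + 1) * half]
            nengo.Connection(pre.neurons, post.neurons,
                             transform=block, synapse=0.005)

With this split, each core stores two 200 × 200 blocks (8 * 2 * 200 * 200 = 640,000 bits), whereas the unsplit 400 × 400 matrix (1,280,000 bits) would not fit on one core.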

The reason you end up with more input axons than expected is likely that we use an on/off neuron encoding for the input (i.e. one “on” neuron and one “off” neuron for each input dimension), which results in twice as many input axons.
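A quick check with that accounting reproduces the number in your error message:

input_size = 1600
n_reservoir_neurons = 1000
IN_AXONS_MAX = 4096  # per-core limit from the error message

# one on/off pair per input dimension, plus one axon per recurrent neuron
n_in_axons = 2 * input_size + n_reservoir_neurons
print(n_in_axons)  # 4200, which exceeds 4096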

The current master branch of nengo_loihi provides better support for sparse weights when used with the Nengo development version and Nengo’s new Sparse transform, if you choose to go that route. Sparse matrices require an index to be stored with each weight, though, so you’ll need fewer than 50% of your weights to be non-zero to get any benefit from sparse matrices in terms of storage space.
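Here is a rough sketch of what that could look like (this assumes the development versions mentioned above; the 10% density is an arbitrary illustration, not a recommendation):

import scipy.sparse
import nengo

n = 1000
# random sparse recurrent weights with 10% of entries non-zero
J_sparse = scipy.sparse.random(n, n, density=0.1, format='csr')

with nengo.Network() as model:
    A = nengo.Ensemble(n, 1,
                       gain=nengo.dists.Uniform(0.05, 0.05),
                       bias=nengo.dists.Uniform(0, 0))
    nengo.Connection(
        A.neurons, A.neurons,
        transform=nengo.transforms.Sparse((n, n), init=J_sparse),
        synapse=0.005,
    )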