How many neurons can be fully connected?


#1

How many neurons can I fully connect from one to another using Nengo Loihi?

At least for the v0.4.0 emulator, the answer seems to depend on whether I partition the ensemble into a number of sub-ensembles ($d$ ensembles, each containing $n$ neurons), even though the total number of neurons ($nd$) and the total number of connections ($n^2d^2$) remain the same: every sub-ensemble is fully connected to every sub-ensemble, including itself, giving $d^2$ ensemble pairs of $n^2$ weights each. In other words, it seems to depend on how the same number of virtual resources (neurons and connections) is physically mapped onto the hardware.

     n  d   nd      ?
0  512  1  512  False
1  256  2  512  False
2  170  3  510   True
3  128  4  512   True
4  102  5  510   True
5   85  6  510   True
6   73  7  511   True

For example, in the above table, 4 ensembles of 128 neurons each are okay, while 1 ensemble of 512 neurons is not. In both cases, there are 512 neurons and $512^2 = 262{,}144$ connections.

Is there an equation that describes this in general? Is there a way to have nengo_loihi perform the optimal partitioning for a given ensemble or network configuration, or some helper functions for satisfying these constraints?

import warnings
warnings.filterwarnings("ignore")  # suppress solver/build warnings to keep the output readable

from collections import defaultdict

import numpy as np
from pandas import DataFrame

import nengo
from nengo_loihi import Simulator
from nengo_loihi.builder import BuildError

def attempt(n, d):
    """Return True if d all-to-all connected ensembles of n neurons build."""
    with nengo.Network(seed=0) as model:
        ensembles = [nengo.Ensemble(n, 1) for _ in range(d)]
        # Fully connect every ensemble to every ensemble (including itself),
        # using full weight matrices rather than factored decoders.
        for ens1 in ensembles:
            for ens2 in ensembles:
                nengo.Connection(ens1, ens2,
                                 solver=nengo.solvers.LstsqL2(weights=True))

    try:
        # Building the simulator is enough to trigger resource allocation;
        # no simulation steps are needed.
        with Simulator(model, progress_bar=None):
            pass
    except BuildError:
        return False
    else:
        return True
    
data = defaultdict(list)
nd = 512  # target total neuron count
for d in range(1, 8):
    n = nd // d  # neurons per ensemble; floor division, so n*d can fall just short of 512
    data['n'].append(n)
    data['d'].append(d)
    data['nd'].append(n*d)
    data['?'].append(attempt(n, d))
    
print(DataFrame(data))

#2

Right now, the partitioning is very simple: we map one ensemble to one Loihi core. Each core has a fixed amount of memory, so when you split your neurons across more cores, you have more synapse memory per neuron.
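
To make that concrete: with one ensemble per core, a core holding an ensemble of $n$ neurons that receives full weight matrices from all $d$ ensembles stores roughly $n \times nd = n^2d$ weights, so to a first approximation the constraint has the form $n^2d \le C$ for some per-core capacity $C$. Below is a minimal sketch of that check; the budget value is an assumption chosen to fall between the largest passing case in the table above ($170^2 \cdot 3 = 86{,}700$) and the smallest failing case ($256^2 \cdot 2 = 131{,}072$), not the actual resource model.

# Hypothetical feasibility check implied by the one-ensemble-per-core mapping.
SYNAPSES_PER_CORE = 100000  # assumed per-core weight budget, NOT the real Loihi limit

def fits(n, d):
    # One ensemble of n neurons per core; full weight matrices from all
    # d ensembles mean the core stores n * (n * d) = n**2 * d weights.
    return n**2 * d <= SYNAPSES_PER_CORE

for n, d in [(512, 1), (256, 2), (170, 3), (128, 4)]:
    print(n, d, fits(n, d))  # reproduces the False, False, True, True pattern above

The real budget will differ once per-synapse overhead and other per-core state are counted, but that is the general shape of the equation asked about in #1.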

At some point this should absolutely be done better, but for now, it’s just this simple one-to-one mapping.