Reservoir example with spiking neurons

Before I answer your new questions, I’ll address a question you had before that I didn’t answer:

I spoke to my colleagues and they informed me that @tcstewar, @arvoelke and @drasmuss have all worked on some version of a reservoir computing network in Nengo before. In order of recency, @tcstewar has code that works with the latest-ish version of Nengo, @arvoelke’s code has only been thoroughly tested with Nengo 2.0, and @drasmuss has code that only works with Nengo 1.4.

@tcstewar has a Jupyter notebook here that steps through how to set up a network with recurrent neuron-to-neuron connections. While this is not specifically an LSM, it can serve as a basis for an LSM network (since the structures are similar). @arvoelke has perhaps the most comprehensive example here, where he constructs a reservoir network in Nengo. However, his code uses custom Nengo code (e.g., his custom Reservoir network) from his NengoLib library, and this has only been tested to work with Nengo 2.0. With some work, it may be possible to get his NengoLib reservoir computing network to work in the latest version of Nengo… it may even be possible to extract just the reservoir.py file and use it as a standalone network in Nengo (my quick look at the file didn’t reveal anything that would stop it from working with the latest version of Nengo).

The test_spike_in2.py code actually demonstrates quite the opposite. The code is separated into two parts. The first part builds and runs a “reference” neural model; the second part uses the spike data recorded from that model as an input signal. In the first model, the probe is attached to the neuron output of ens1. Since the input signal is connected to the input of the ens1 neural population, the ens1 ensemble is essentially “encoding” the input into a series of spike trains.

In the second part of the model, the encoded spike train is fed through a weight matrix that “decodes” the spike train into information that ens2 can use as an input. This weight matrix is determined by the NEF algorithm. To learn more about this, I recommend you watch the Nengo Summer School YouTube playlist I linked above, or read the documentation here.
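To make that two-part structure concrete, here is a minimal sketch of the same idea (this is not the actual test_spike_in2.py code; the ensemble sizes, input signal, and replay node are placeholders of my own):

import numpy as np
import nengo

n_neurons, dt = 50, 0.001

# Part 1: the "reference" model. ens1 encodes the input into spike trains.
with nengo.Network(seed=0) as model1:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens1 = nengo.Ensemble(n_neurons, dimensions=1)
    readout = nengo.Node(size_in=1)
    nengo.Connection(stim, ens1)
    conn = nengo.Connection(ens1, readout)  # the NEF solves for decoders here
    p_spikes = nengo.Probe(ens1.neurons)    # raw spikes (no synapse)

with nengo.Simulator(model1, dt=dt) as sim1:
    sim1.run(1.0)

spikes = sim1.data[p_spikes]        # shape: (n_steps, n_neurons)
decoders = sim1.data[conn].weights  # shape: (1, n_neurons)

# Part 2: replay the recorded spikes and "decode" them into an input for ens2.
with nengo.Network() as model2:
    replay = nengo.Node(
        lambda t: spikes[min(int(round(t / dt)) - 1, len(spikes) - 1)],
        size_out=n_neurons,
    )
    ens2 = nengo.Ensemble(n_neurons, dimensions=1)
    nengo.Connection(replay, ens2, transform=decoders, synapse=0.005)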

I’m not entirely clear which layer you are asking about here. Are you asking about the process of recording spiking output from an output layer that consists of neurons? In any case, in Nengo, we tend to apply filters to spike outputs to “smooth” them. These smoothed spikes are then fed through a set of weights (known as decoders, or you can think of them as “output” or “readout” weights) that linearly combine these signals into real-valued (non-spiky) signals.
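As a sketch of that pipeline (the ensemble size, synapse time constant, and variable names here are my own, not from any specific example), you can do the smoothing and decoding manually on probed spike data:

import numpy as np
import nengo

with nengo.Network(seed=0) as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(100, dimensions=1)
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    conn = nengo.Connection(ens, out)    # holds the readout (decoder) weights
    p_spikes = nengo.Probe(ens.neurons)  # raw spike trains

with nengo.Simulator(model) as sim:
    sim.run(1.0)

# Smooth the spikes with a low-pass synapse, then apply the decoders to
# get a real-valued readout signal.
filtered = nengo.Lowpass(0.01).filt(sim.data[p_spikes], dt=sim.dt)
decoded = filtered @ sim.data[conn].weights.T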

In a sense, yes. However, I would not use the word “tune”, as “tuning” implies some sort of learning process. Rather, the solvers use a mathematical algorithm (e.g., least squares with L2 regularization) to solve for these weights. The process by which this is done is described in the documentation of the NEF.
I recommend checking out these examples to see how Nengo (and the NEF) can be used to create neural networks that “compute” functions without needing a learning process at all.
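For example, here is a minimal sketch of how you would specify the solver on a connection that computes a function (the squaring function and the regularization value are arbitrary choices of mine). The decoders are solved for at build time; no learning rule is involved:

import nengo
from nengo.solvers import LstsqL2

with nengo.Network() as model:
    stim = nengo.Node(lambda t: t - 0.5)
    a = nengo.Ensemble(100, dimensions=1)
    b = nengo.Ensemble(100, dimensions=1)
    nengo.Connection(stim, a)
    # Decoders computed by regularized least squares, not by learning
    nengo.Connection(a, b, function=lambda x: x ** 2, solver=LstsqL2(reg=0.1))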

Yes, it is possible to create random connections. When you do this:

nengo.Connection(ens1.neurons, ens2.neurons, transform=<weight matrix>)

Nengo will create a connection between all of the neurons in ens1 and all of the neurons in ens2. You can set the <weight matrix> to a bunch of random values to create random connections. If you set any element to 0, it will effectively mean that the respective neurons are not connected. I should note that Nengo operates at the “ensemble” (a group of neurons) level, rather than at the individual neuron level. This is done to make the computation of the neural simulation more efficient.
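As a quick sketch (the weight scale and sparsity level here are arbitrary values I picked for illustration), note that the transform must have shape (ens2.n_neurons, ens1.n_neurons):

import numpy as np
import nengo

rng = np.random.RandomState(0)

with nengo.Network() as model:
    ens1 = nengo.Ensemble(100, dimensions=1)
    ens2 = nengo.Ensemble(80, dimensions=1)

    # Dense random weight matrix: shape (post.n_neurons, pre.n_neurons)
    weights = rng.uniform(-1e-3, 1e-3, size=(ens2.n_neurons, ens1.n_neurons))
    # Zero out ~80% of the entries; those neuron pairs end up unconnected
    weights *= rng.rand(*weights.shape) < 0.2

    nengo.Connection(ens1.neurons, ens2.neurons, transform=weights)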

Yes, you can. There are multiple ways to do it. You can define a function that references an object whose value you can change. Or, the way I like to do it is to define a class where all of the data can be stored and manipulated. You can then pass a method of the class as the node’s input function, and modify the data (i.e., modify the class attributes) without touching the Nengo model at all:

import nengo

class InputFunc:
    def __init__(self, ...):
        self.data = ...

    def step(self, t):
        # Called by the node on every timestep of the simulation
        return self.data[...]

my_input = InputFunc()

with nengo.Network() as model:
    inp = nengo.Node(my_input.step)
    ...

# Run first simulation
with nengo.Simulator(model) as sim:
    sim.run(1)

# Modify data
my_input.data = ...

# Run second simulation
with nengo.Simulator(model) as sim:
    sim.run(1)
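This works because the node calls my_input.step on every timestep, so each run simply reads whatever self.data holds at that moment. Creating a new Simulator rebuilds the model, but the model definition itself never has to be edited.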

I sort of touch on the reason for this earlier. In Nengo (or rather, in the NEF), the thought paradigm is that with the appropriate set of decoding weights, one can take a spike train, filter it through a synapse, and apply the decoding weights to get out a real-valued, time-varying signal that represents what your network is supposed to produce / calculate. In this way, the way information is encoded in Nengo straddles the line between a spike-pattern code and a rate code: it is both, and neither, at the same time (it’s very confusing… I know… it takes a while to get your head wrapped around this concept).

For this reason, Nengo probes can be configured to apply a set of decoder weights (this is done by default on certain objects) and a synapse (to filter the spike data). By default, when you probe a .neurons object, Nengo will not apply any decoding weights, nor will it add a synapse, so you will get the raw spikes.
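For example, here is how the common probe configurations compare (a small sketch; the synapse time constant is an arbitrary value):

import nengo

with nengo.Network() as model:
    stim = nengo.Node(0.5)
    ens = nengo.Ensemble(50, dimensions=1)
    nengo.Connection(stim, ens)

    # Decoded output: decoder weights applied, filtered with a 10 ms synapse
    p_decoded = nengo.Probe(ens, synapse=0.01)
    # Raw spikes: no decoders, no synapse
    p_spikes = nengo.Probe(ens.neurons)
    # Per-neuron filtered spikes: a synapse, but no decoders
    p_filtered = nengo.Probe(ens.neurons, synapse=0.01)

with nengo.Simulator(model) as sim:
    sim.run(0.5)

# sim.data[p_decoded] has shape (n_steps, 1); the other two are (n_steps, 50)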