Hi @Alperen, welcome to the forums! Conceptually, this kind of serial connection learning is possible, but the way to do it with Nengo is not the same as you would do with a traditional ANN. The reason is that Nengo respects more biological constraints than traditional ANNs do.
The error-minimization rule that is described in the example you linked is called PES. In contrast to other learning rules that minimize error (notably backpropagation), PES imposes the following additional constraints:
- The error signal must be computed by the network itself.
- The error signal must be explicitly projected to the connection that is being modified to minimize that error (both constraints are illustrated in the sketch below).
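To make these constraints concrete, here is a minimal sketch in the spirit of the learn-a-communication-channel example (the sine-wave input, neuron counts, and run time are my own illustrative choices). The error is computed by an ensemble of neurons inside the network, and it is explicitly routed to the learned connection through its learning rule:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)
    error = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)

    # The connection to be learned; it starts out computing 0,
    # so the whole mapping is learned online by PES.
    conn = nengo.Connection(pre, post, function=lambda x: [0],
                            learning_rule_type=nengo.PES())

    # Constraint 1: the error signal (post - target) is computed
    # by the network itself.
    nengo.Connection(post, error)
    nengo.Connection(stim, error, transform=-1)

    # Constraint 2: the error is explicitly projected to the
    # connection that is being modified.
    nengo.Connection(error, conn.learning_rule)

with nengo.Simulator(model) as sim:
    sim.run(10.0)
```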
In the Nengo examples, we go through how to build networks to compute the error signal for simple functions like communication channels and multiplication. If you wanted to faithfully reproduce backpropagation, then you would build a network with ensembles computing two error signals: the hidden-output error signal and the input-hidden error signal. Those error signals would be projected to the hidden-output and input-hidden connections, respectively.
If you were to do this, you would find that some quantities are difficult to calculate in a Nengo network because of other constraints. For example, in order to compute error signals for the purpose of backpropagation, you need to know the derivative of the neural activation function. This isn’t information that a neural network would generally be able to know (a neuron computes a function on its inputs, but doesn’t have meta-knowledge of what function that might be), though it can be approximated with several different types of networks (see Tripp & Eliasmith, 2010 and Aaron Voelker’s work).
Stepping back a bit, while it may seem like backpropagation is something that a neural simulator with AI applications cannot do without, the NEF and PES rule fill the same role as backpropagation in most cases. Typically you use backpropagation to learn some mapping between input-output pairs. In traditional neural networks, this is done by setting up one or more layers of “hidden” neurons between the input and output layer in order to capture the nonlinear interactions between neurons in the input layer, where those input layer neurons each represent one input signal.
In Nengo, we typically use distributed representations in which many neurons represent each input signal. If there is reason to believe that there will be nonlinear interactions between input signals, then we have neurons represent multiple signals (see the combining example). By doing this, we no longer need a layer of hidden neurons between the input and output layers; we can compute nonlinear functions in a single input-output connection (see the multiplication example).
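For instance, here is a hedged sketch of that pattern (the signal choices and neuron counts are illustrative, following the combining and multiplication examples): two input signals feed a single two-dimensional ensemble, and their product is decoded directly on the outgoing connection, with no hidden layer in between.

```python
import numpy as np
import nengo

with nengo.Network() as model:
    in_a = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    in_b = nengo.Node(lambda t: np.cos(2 * np.pi * t))

    # One ensemble represents both signals together (a distributed
    # 2D representation); the radius covers the corners of the space.
    combined = nengo.Ensemble(200, dimensions=2, radius=np.sqrt(2))
    product = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(in_a, combined[0])
    nengo.Connection(in_b, combined[1])

    # The nonlinear function is computed on a single connection,
    # with no hidden layer needed.
    nengo.Connection(combined, product, function=lambda x: x[0] * x[1])
```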
What if we don’t know the function mapping inputs to outputs? We can approximate it automatically by specifying the inputs and outputs in the connection itself:
nengo.Connection(pre, post, eval_points=inputs, function=outputs)
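Fleshing that one-liner out into a runnable sketch (the `inputs`/`outputs` arrays below are made-up sample data for learning x-squared, just to illustrate the shapes involved; Nengo solves for decoders that best map each evaluation point to its target output):

```python
import numpy as np
import nengo

# Hypothetical (input, output) pairs; in practice these would come
# from your data.
inputs = np.random.uniform(-1, 1, size=(1000, 1))
outputs = inputs ** 2

with nengo.Network() as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, dimensions=1)
    post = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)
    # Decoders are solved from the sample points rather than from
    # a symbolic function.
    nengo.Connection(pre, post, eval_points=inputs, function=outputs)
```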
Or we can learn the mapping online by building Nengo objects that compute the error signal as the simulation runs (as in the communication channel example, or the product learning example).
To say all this in a slightly different way, minimizing error in order to compute functions is at the core of how Nengo works, and it doesn't require the multi-layer architecture that other ANNs use. To give better advice on how to use Nengo for your problem, we would need to know more about the problem being solved. There are situations where many layers and complex training techniques yield far better results; these things have been done with Nengo, but setting them up has not yet been packaged into a simple example (see, for example, Hunsberger & Eliasmith, 2016).