Hi @mraptor, welcome to the forums!
I have a few corrections to the way that you’re thinking of the simulation process. Nengo does a lot under the hood, so I won’t try to explain it all at once, but hopefully I can shed a bit of light onto the underlying procedures.
First, it is easier to reason about what Nengo is doing by thinking about what happens on each timestep, rather than thinking about what is happening at each part of the model over time. So, we typically do not think about a whole spike train coming into the decoder; instead, what’s important is the filtered neural activity at the current moment in time. The vector of neural activities is what is weighted by the decoding weights to get the decoded vector value at the current moment in time.
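To make that concrete, here is a minimal NumPy sketch (with made-up neuron counts and random numbers, not values from any real model) of what happens on a single timestep: the decoded value is just the current vector of filtered activities weighted by the decoding weights.

```python
import numpy as np

# Hypothetical sizes: an ensemble of 50 neurons representing a 1-D value.
rng = np.random.default_rng(0)
n_neurons, dims = 50, 1

# Filtered neural activities at the current timestep (one entry per neuron).
# In a real simulation these come from synaptically filtering spike trains.
activities = rng.uniform(0, 200, size=n_neurons)  # firing rates in Hz

# Decoding weights (n_neurons x dims), solved for before the simulation runs.
decoders = rng.normal(0, 1e-3, size=(n_neurons, dims))

# The decoded value at this moment in time is a single weighted sum;
# no spike history is consulted beyond what the filtering already did.
decoded = activities @ decoders  # shape (dims,)
```

The same matrix-vector product is repeated on every timestep with whatever the filtered activities happen to be at that moment.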
For ensembles, the value is never explicitly transformed by the desired function. The desired function is used to determine the decoding weights that are used to weight the neural activities; after solving for decoding weights, the function is never used again.
The tuning curves are determined by the neuron model being used. They are based on the response curves, i.e., the expected firing rate as a function of injected current. The response curves are projected into the ensemble's vector space (through each neuron's encoder, gain, and bias) to get tuning curves, which describe firing rates as a function of the value the ensemble is representing. In conventional neural networks, the "label" tells you the output corresponding to a particular input. For us, we generate those "labels" by evaluating the desired function at many randomly selected points in the vector space represented by the ensemble.
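Here is a small sketch of that projection for a single hypothetical neuron, using the standard LIF rate equation (the encoder, gain, and bias values are invented for illustration): the response curve maps current to rate, and composing it with the encoding turns represented values into rates.

```python
import numpy as np

def lif_rate(J, tau_rc=0.02, tau_ref=0.002):
    """LIF response curve: steady-state firing rate for input current J."""
    rate = np.zeros_like(J)
    above = J > 1  # neurons only fire above the threshold current
    rate[above] = 1.0 / (tau_ref + tau_rc * np.log1p(1.0 / (J[above] - 1)))
    return rate

# Hypothetical neuron in a 1-D ensemble: encoder, gain, and bias turn a
# represented value x into an input current J.
encoder, gain, bias = 1.0, 2.0, 0.5
x = np.linspace(-1, 1, 101)
J = gain * encoder * x + bias

# The tuning curve is the response curve evaluated over represented values:
# this neuron is silent for sufficiently negative x and fires faster as x
# grows, because its encoder points in the positive direction.
tuning = lif_rate(J)
```

Changing the encoder's sign would flip the curve, and different gains and biases shift where along the represented axis the neuron starts firing, which is how an ensemble gets a diverse set of tuning curves.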
I hope that answers some of your questions. Like I said, it’s probably not realistic to get a full understanding of how Nengo works through a few forum posts, but hopefully we can point you to some other resources to fill in the gaps.
One resource that might be useful for you is this minimal description of NEF algorithms. It is essentially a super pared down version of what Nengo is doing under the hood.