Thanks for letting me know your background! Although Nengo can be used for deep learning, the way Nengo constructs neural nets isn't easily related to backprop, but I think you already know this.
So let's say I have two neurons firing at the same frequency, and I do a weighted combination where the first weight is 1 and the second is 1/2.
I'm guessing you mean decoder weights.
Suppose this recombination is happening at a dendrite.
So before an encoder.
Now let's suppose the first neuron is firing twice as fast as the second neuron, but both have equal weights. Again, I would think the first neuron contributes twice as much as the second.
Is that true?
neuron A = 2*1 = 2
neuron B = 1*0.5 = 0.5
If that's what you meant, then it is true.
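To make the "rate times weight" idea concrete, here's a minimal sketch (the rates and weights are just the numbers from the example above, nothing Nengo-specific):

```python
# Contribution of each neuron to the combined signal is rate * weight.
rate_a, rate_b = 2.0, 1.0      # neuron A fires twice as fast as neuron B
w_a, w_b = 1.0, 0.5            # the two weights from the example

contribution_a = rate_a * w_a  # 2.0
contribution_b = rate_b * w_b  # 0.5
combined = contribution_a + contribution_b  # 2.5
print(contribution_a, contribution_b, combined)
```

So neuron A ends up contributing four times as much as neuron B once both the rate difference and the weight difference are in play.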
Do time-constants and filtering affect this?
Yes, but usually you use the same time constant for all neurons in an ensemble, unless you're trying to get some fancy dynamics. Time constants filter spikes into a continuous signal.
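Here's a rough NumPy sketch of what that filtering does: a first-order lowpass (exponential) synapse turns a spike train into a smooth trace that hovers near the firing rate. The 50 ms time constant and 50 Hz spike train are assumed values for illustration, not anything taken from Nengo:

```python
import numpy as np

dt = 0.001    # 1 ms time step
tau = 0.05    # 50 ms synaptic time constant (assumed value)
t = np.arange(0, 0.5, dt)

# Toy spike train: one spike every 20 ms (50 Hz); each spike has
# amplitude 1/dt so that it integrates to 1
spikes = np.zeros_like(t)
spikes[::20] = 1.0 / dt

# First-order lowpass filter applied step by step
filtered = np.zeros_like(t)
for i in range(1, len(t)):
    filtered[i] = filtered[i - 1] + (dt / tau) * (spikes[i] - filtered[i - 1])

# The filtered trace ripples around the firing rate (~50) instead of
# being a train of delta-like impulses
print(filtered[-1])
```

A shorter time constant tracks the spikes more faithfully (more ripple); a longer one gives a smoother but more sluggish signal.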
So if I want to recreate an input signal (that was feeding into these two neurons), then a weighted combination of their outputs will create a current (in biology, this would be at a dendrite that the two neurons synapse onto) that imitates the input current over time?
Right. A linear combination of the neurons' filtered outputs, using decoder weights, can imitate the input that the neuron population is seeing and feed it into the next population.
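Here's a sketch of how those decoders can be found by least squares. The rectified-linear tuning curves, gains, and biases below are my own made-up parameters for illustration, not Nengo's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D population: rectified-linear tuning curves
# a_i(x) = max(0, gain_i * e_i * x + bias_i)
n_neurons = 30
x = np.linspace(-1, 1, 101)                       # sampled represented values
encoders = rng.choice([-1.0, 1.0], n_neurons)     # preferred direction (+/-)
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

# Activity matrix: rows are sample points, columns are neurons
A = np.maximum(0, gains * (x[:, None] * encoders) + biases)

# Least-squares decoders d such that A @ d approximates x
d, *_ = np.linalg.lstsq(A, x, rcond=None)

x_hat = A @ d                                     # reconstructed input
print(np.mean(np.abs(x_hat - x)))                 # small reconstruction error
```

The weighted sum `A @ d` is exactly the "weighted combination of outputs" in the question: each neuron's activity times its decoder, summed, recovers the input.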
What happens if you have more than one dimension so you are trying to recreate both the amplitude and direction of a vector over time?
You assume two-dimensionally tuned neurons and proceed as in the one-dimensional case. This is covered at the end of the first lecture notes.
Does that answer most of your questions?