Hi! Great platform; very impressive and slick, especially for us poor souls who have had to learn the ropes using the likes of NEURON. I am particularly impressed by the powerful encoding arising from tuning curves.
I would like to combine this encoding/decoding mechanism with more ‘standard’ computational neuroscience: plain old unclamped LIFs. However, I cannot seem to get my usual neuroscientific ‘hello world’ to work: a single, regularly firing LIF with a firing threshold below its resting potential. The response curve seems pretty hardwired. Is it possible to have Ensembles driven purely by spike/cell dynamics? If so, how? Thanks!
Hi @oddman. Welcome, and thanks for your interest and feedback!
Have you taken a look at the NEF summary example? It might answer your questions, in particular its use of `max_rates` to shift and scale the neural response curves (or, alternatively, `gain` and `bias`, which are equivalent but lower-level). By default these are randomly distributed, but they can also be set per neuron (even for just a single neuron). A voltage probe can be added to measure each neuron's membrane voltage.
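To make the shift/scale picture concrete: the tuning curve of a LIF neuron comes from the standard steady-state LIF rate equation, with `gain` scaling the input and `bias` shifting it (as far as I understand, `max_rates`/`intercepts` are just a friendlier way of choosing those two numbers). A plain-Python sketch, with illustrative parameter values:

```python
import math

def lif_rate(x, gain, bias, tau_rc=0.02, tau_ref=0.002):
    """Steady-state LIF firing rate for input current J = gain * x + bias.

    This is the textbook LIF rate equation: gain scales the response
    curve along the input axis and bias shifts it.
    """
    J = gain * x + bias
    if J <= 1.0:
        return 0.0  # subthreshold current: the neuron never spikes
    return 1.0 / (tau_ref + tau_rc * math.log1p(1.0 / (J - 1.0)))

# bias > 1 means the neuron fires even with zero input --
# effectively a firing threshold below the resting potential:
print(lif_rate(0.0, gain=1.0, bias=2.0))  # ~63 spikes/s at rest
print(lif_rate(0.0, gain=1.0, bias=0.5))  # 0.0 (silent at rest)
```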
Let us know if this helps. If you are still stuck, you could post some code that you’ve tried and we could work from there.
Thanks for the quick response!
EDIT: never mind, I figured out how to approximate it: I just need to set a fixed input and treat the intercept of the tuning curve as the leak potential.
Yes, I did read basically the entire documentation, and I did grasp that `gain` affects the response curve (though I do not yet have an intuition of exactly how; I will have to tinker with it some more).
Let me try to rephrase my question: I am not (only) interested in neuronal assemblies as representations of explicit multidimensional vectors (though that certainly is a nifty concept). I am interested in simulating spiking networks at scale while manipulating their input and output. However, I want to manipulate weights and cell dynamics directly, not indirectly via tuning curves. It seems that the ‘learning’ connections already allow for explicit manipulation of weights, which is nice. (And I would follow up my ‘hello world’ explorations by seeing to what extent I can customize synaptic learning rules.) However, when diving into the dynamics of a single neuron, it appears hard to escape the implicit definition via tuning curves. Hence, my question:
Is it possible to disable the direct effect on membrane potential from tuning curves?