@xchoo @zerone Thank you for your answers.
After considering your answers and playing around a bit more with the NengoDL layers, I built the following model. I guess the structure of the weights would look like this:
```python
import numpy as np
import tensorflow as tf

import nengo
import nengo_dl

with nengo.Network(seed=0) as net:
    # set some default parameters for the neurons that will make
    # the training progress more smoothly
    net.config[nengo.Ensemble].max_rates = nengo.dists.Choice([100])
    net.config[nengo.Ensemble].intercepts = nengo.dists.Choice([0])
    net.config[nengo.Connection].synapse = None
    neuron_type = nengo.LIF(amplitude=0.01)

    # this is an optimization to improve the training speed,
    # since we won't require stateful behaviour in this example
    nengo_dl.configure_settings(stateful=False)

    # the input node that will be used to feed in input images
    inp = nengo.Node(np.zeros(2 * 1))

    x = nengo_dl.Layer(tf.keras.layers.Dense(1))(inp, shape_in=(2 * 1, 1))
    x = nengo_dl.Layer(neuron_type)(x)

    x = nengo_dl.Layer(tf.keras.layers.Dense(2))(x)
    x = nengo_dl.Layer(neuron_type)(x)

    x = nengo_dl.Layer(tf.keras.layers.Dense(3))(x)
    x = nengo_dl.Layer(neuron_type)(x)

    out = nengo_dl.Layer(tf.keras.layers.Dense(units=4))(x)

    out_p = nengo.Probe(out, label="out_p")
    out_p_filt = nengo.Probe(out, synapse=0.1, label="out_p_filt")
```
The weights of this model are:

```
[<tf.Variable 'TensorGraph/base_params/trainable_float32_7:0' shape=(7,) dtype=float32,
  numpy=array([1., 1., 1., 1., 1., 1., 1.], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense/kernel:0' shape=(1, 1) dtype=float32,
  numpy=array([[-1.1600207]], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense/bias:0' shape=(1,) dtype=float32,
  numpy=array([0.], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_1/kernel:0' shape=(2, 2) dtype=float32,
  numpy=array([[ 0.02475715, -0.13831842],
               [-0.2240473 ,  1.206355  ]], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_1/bias:0' shape=(2,) dtype=float32,
  numpy=array([0., 0.], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_2/kernel:0' shape=(2, 3) dtype=float32,
  numpy=array([[ 0.72141063,  0.29407883,  0.0322665 ],
               [-0.23862988,  0.1772492 , -0.9892268 ]], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_2/bias:0' shape=(3,) dtype=float32,
  numpy=array([0., 0., 0.], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_3/kernel:0' shape=(3, 4) dtype=float32,
  numpy=array([[-0.4879959 , -0.48369777,  0.00672376,  0.50649774],
               [ 0.20511496,  0.6891403 , -0.42060506,  0.21907902],
               [-0.55083364,  0.68524706, -0.01406157, -0.00568473]], dtype=float32)>,
 <tf.Variable 'TensorGraph/dense_3/bias:0' shape=(4,) dtype=float32,
  numpy=array([ 0.05611312,  0.01365218,  0.05257304, -0.10704393], dtype=float32)>]
```
According to my understanding now:
The first variable, with shape=(7,), holds the weights of the neuron_type layers used across the whole model; let's call this array A (it starts out as all ones).
For the first dense layer: the kernel shape is (1, 1) and the bias shape is (1,), since only one neuron with a single weight connection is used: because of shape_in=(2, 1), the layer takes one input element at a time. Once both input elements (input size = 2) have passed through the dense layer, they are fed to the neuron_type layer, whose 2 weights are appended to A.
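To make that concrete, here is a toy plain-Python sketch (not NengoDL's actual implementation) of what shape_in=(2, 1) does: the Dense(1) layer's single kernel weight and bias are shared across both input elements, so one (1, 1) kernel still produces 2 outputs.

```python
# Toy sketch: a Dense(1) layer applied along the last axis of a
# (2, 1)-shaped input. The single kernel weight w and bias b are
# shared across both rows, so 1 kernel weight -> 2 outputs, which
# then feed 2 neuron_type neurons.
def dense_per_row(inputs, w, b):
    # inputs: 2 rows of 1 element each, i.e. shape (2, 1)
    return [row[0] * w + b for row in inputs]

inp = [[0.5], [-1.0]]   # 2 input elements, 1 feature each
w, b = -1.16, 0.0       # kernel shape (1, 1), bias shape (1,)
out = dense_per_row(inp, w, b)
print(len(out))         # 2 outputs from a single kernel weight
```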
The next dense layer has 2 neurons, so the kernel is (2, 2) and the bias is (2,). The output of this layer has shape (2, 1) and is fed to a neuron_type layer whose 2 weight values are appended to A.
The next dense layer has 3 neurons, so the kernel is (2, 3) and the bias is (3,). The output has shape (3, 1) and is fed to a neuron_type layer whose 3 weight values are appended to A.
This makes the total number of neuron_type weights 2 + 2 + 3 = 7, as indicated by the first shape, (7,).
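The tally above can be checked with a short plain-Python sketch (layer sizes copied from the model; this only counts parameters, it doesn't run NengoDL):

```python
# Count parameters layer by layer for the model above.
# Each entry is the kernel shape (input size, output size) of one
# Dense layer. The first three Dense layers are each followed by a
# neuron_type layer whose parameter count equals its number of
# neurons; the first Dense(1) has kernel (1, 1) but, via
# shape_in=(2, 1), is applied to each of the 2 input elements,
# so it feeds 2 neurons.
dense_layers = [(1, 1), (2, 2), (2, 3), (3, 4)]  # kernel shapes
neuron_layer_sizes = [2, 2, 3]                   # neurons after each Dense

kernel_params = sum(i * o for i, o in dense_layers)
bias_params = sum(o for _, o in dense_layers)
neuron_params = sum(neuron_layer_sizes)

print(kernel_params)   # 1 + 4 + 6 + 12 = 23
print(bias_params)     # 1 + 2 + 3 + 4 = 10
print(neuron_params)   # 2 + 2 + 3 = 7, matching shape=(7,)
```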
Does this make sense? Am I correct in my interpretation?
If yes, then what do these neuron_type weight values indicate? Are they the voltage threshold of each neuron, i.e. the neuron fires when its membrane potential exceeds this threshold?
Thank you for your reply in advance.