Getting/Interpreting neural weights

Hi All,

This is Larry. I did the coin-flipping task at this summer’s brain camp
(which I loved). I’m still working on my project, and I’ve confirmed that my
model consistently underestimates alternations at a Probability of
Alternation around 0.6. (This is the same region where people show a bias
and underestimate alternations as well.)

Now I want to figure out why. My current theory is that the learning
process is not stable (does not converge) under this much uncertainty.
To investigate, I’ve added a probe to look at the neural weights. I have a
two-dimensional input ensemble (200 neurons) connected via PES learning to
a one-dimensional prediction ensemble (100 neurons). When I probe the
connection to get the weights, the result has three dimensions:

  • dimension 0 is time during learning; the length of this axis depends
    on the ‘sample_every’ parameter

  • dimension 1 has length 1 (the only index is 0)

  • dimension 2 has length 200 (the number of neurons in my input ensemble)

I expected the weights to have three dimensions (time, 200 for the input
ensemble, 100 for the output ensemble), so I seem to be misunderstanding
something. Isn’t each neuron in the input connected to each neuron in the
output? What do the dimensions mean? Any thoughts or advice?
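In case it helps, here’s a stripped-down sketch of what I’m doing. The
stimulus and target signals below are stand-ins for my actual coin-flip
task, and names like `pre` and `post` are just placeholders:

```python
import numpy as np
import nengo

with nengo.Network() as model:
    # Stand-in 2-D stimulus; the real task feeds the flip history here.
    stim = nengo.Node(lambda t: [np.sin(t), np.cos(t)])
    target = nengo.Node(lambda t: np.sin(t))  # stand-in training signal

    pre = nengo.Ensemble(200, dimensions=2)   # 2-D input ensemble
    post = nengo.Ensemble(100, dimensions=1)  # 1-D prediction ensemble
    error = nengo.Ensemble(100, dimensions=1)

    nengo.Connection(stim, pre)
    conn = nengo.Connection(
        pre, post,
        function=lambda x: 0.0,               # start from a zero decode
        learning_rule_type=nengo.PES(),
    )

    # error = prediction - target drives the PES rule
    nengo.Connection(post, error)
    nengo.Connection(target, error, transform=-1)
    nengo.Connection(error, conn.learning_rule)

    weight_probe = nengo.Probe(conn, "weights", sample_every=0.1)

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.data[weight_probe].shape)  # (10, 1, 200), not (10, 200, 100)
```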

Best,

Larry

Oops, sorry! I found an answer to my question on the forum. The dimensions of the weight-probe results are: time × output dimensions × input neurons.
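In other words, the probe returns the learned decoders, not a full
connection matrix: a decoded connection factors the weights as
(post encoders) × (decoders), which is why the 100 neurons of the output
ensemble never show up as an axis. If you do want the effective 100 × 200
neuron-to-neuron matrix, you can reconstruct it. A sketch, reusing `sim`,
`post`, and `weight_probe` from the snippet above (`scaled_encoders` is the
encoder matrix with gain and radius folded in, which mirrors what a
weight solver such as `nengo.solvers.LstsqL2(weights=True)` would build):

```python
# Decoders from the probe: (n_samples, post dimensions, pre neurons)
dec = sim.data[weight_probe]              # shape (10, 1, 200)

# Post ensemble's scaled encoders: (post neurons, post dimensions)
enc = sim.data[post].scaled_encoders      # shape (100, 1)

# Effective neuron-to-neuron matrix at each sample time
full = np.einsum("ij,tjk->tik", enc, dec)  # shape (10, 100, 200)
print(full.shape)
```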

While I’m here… can anyone steer me toward work relating instability in weight values during simulated learning to real-world phenomena?

Best,
Larry