I’m trying to visualise how the weights of an ensemble change over time while using a learning rule. I understand that I need to use a weights probe, as shown in this example; however, I’m a bit confused about how to interpret the results.
The probe returns a t-by-D-by-N tensor, where:
 t is the number of timesteps,
 D is the dimensionality of the ensemble’s decoded output, and
 N is the number of neurons.
Assuming the output is connected to a D-dimensional node, how am I supposed to interpret the various values in this tensor? For example, assume the probe data is stored in a tensor called weights.
Would it be correct to say:

weights[:, :, 0] shows the change over time of all weights leaving the neuron at index 0?
weights[:, 0, :] shows the change over time of the zeroth dimension’s weights across all neurons?
weights[0, :, :] shows the full D-by-N weight matrix at the first timestep?
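To make the shapes concrete, here is a small NumPy sketch of my understanding, using made-up sizes (T=100, D=2, N=50 are arbitrary, and the random data just stands in for the real probe output):

```python
import numpy as np

# Hypothetical probe data: 100 timesteps, 2 dimensions, 50 neurons
T, D, N = 100, 2, 50
weights = np.random.default_rng(0).normal(size=(T, D, N))

# All weights leaving the neuron at index 0, for every dimension, over time:
per_neuron = weights[:, :, 0]   # shape (T, D)

# Dimension-0 weights of every neuron, over time:
per_dim = weights[:, 0, :]      # shape (T, N)

# The full D-by-N weight matrix at the first timestep:
snapshot = weights[0, :, :]     # shape (D, N)

print(per_neuron.shape, per_dim.shape, snapshot.shape)
```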
To access all the weights at once for analysis (I’m trying to find which ones keep returning to the same value instead of converging), should I reshape the tensor to t-by-(D*N)?
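In other words, something like the following, where each column of the flattened array tracks one individual weight over time (again with made-up sizes, and the "convergence check" is just a crude illustration of the kind of analysis I have in mind):

```python
import numpy as np

T, D, N = 100, 2, 50
weights = np.random.default_rng(1).normal(size=(T, D, N))

# Flatten each timestep's D-by-N matrix into one row of D*N values,
# giving a t-by-(D*N) array with one column per individual weight:
flat = weights.reshape(T, D * N)

# e.g. weights whose recent values still vary a lot have not converged:
recent_spread = flat[-20:].std(axis=0)  # shape (D*N,)

print(flat.shape, recent_spread.shape)
```

With C-order reshaping, column d*N + n of flat corresponds to weights[:, d, n].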
Finally, is this documented anywhere? I couldn’t find it in the Connection documentation.