I believe the difference is that in your tf.layers.dense
version, the final layer is just a linear readout (no neural nonlinearity):
# linear readout
x = nengo_dl.tensor_layer(x, tf.layers.dense, units=10)
but in the nengo.Connection
version, you are including neurons in that final layer:
x, conn4 = nengo_dl.tensor_layer(x, neuron_type,
transform=nengo_dl.dists.Glorot(),
shape_in=(10,), return_conn=True)
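To illustrate why this matters, here is a toy NumPy sketch (not Nengo code; ReLU is assumed as a stand-in for the neuron nonlinearity, and the weights/activities are made up): a rectifying nonlinearity applied after the final weights clamps every output to be nonnegative, whereas a plain linear readout can produce values of either sign.

```python
import numpy as np

# made-up pre-readout activity and readout weights
x = np.array([1.0, -2.0, 0.5])
w = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

linear_readout = w @ x                   # linear: outputs can be any sign
neuron_readout = np.maximum(w @ x, 0.0)  # ReLU clamps negatives to zero

print(linear_readout)  # [ 1. -2.]
print(neuron_readout)  # [1. 0.]
```

So if neuron_type is a rectifying model, the final layer's outputs can never go negative, which makes it harder to match targets that need both signs; hence the suggestion below to end with a purely linear connection instead.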
If you change your final layer to something like:
x_new = nengo.Node(size_in=10)
conn4 = nengo.Connection(x, x_new, transform=nengo_dl.dists.Glorot(),
synapse=None)
then you should see much more similar performance. Note that we're using a Node
with no function here, so it simply passes its input through and acts as a
linear output.