I am looking to benchmark three models by running them multiple times and averaging their results.
The requirement is that each run uses the same seed for its input and that the runs are independent of each other.
To this end I used the following logic:
```python
for i in range(num_runs):
    learned_model_mpes = LearningModel(..., seed=seed + i)
    control_model_pes = LearningModel(..., seed=seed + i)
    control_model_nef = LearningModel(..., seed=seed + i)

    with nengo_dl.Simulator(learned_model_mpes) as sim_mpes:
        sim_mpes.run(sim_time)
    with nengo_dl.Simulator(control_model_pes) as sim_pes:
        sim_pes.run(sim_time)
    with nengo_dl.Simulator(control_model_nef) as sim_nef:
        sim_nef.run(sim_time)
```
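For context, the averaging step I have in mind looks roughly like the sketch below. The stand-in lists are hypothetical placeholders for probe data (in the real loop they would come from something like `sim_mpes.data[probe]` after each `sim.run()`):

```python
# Hypothetical sketch of averaging results across runs.
num_runs = 3
run_results = []
for i in range(num_runs):
    # Stand-in for one run's probe output (a sequence of values).
    run_results.append([float(i + j) for j in range(5)])

# Element-wise mean across the independent runs.
mean_result = [sum(vals) / num_runs for vals in zip(*run_results)]
print(mean_result)  # [1.0, 2.0, 3.0, 4.0, 5.0]
```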
with this snippet at the beginning of each `LearningModel`:

```python
nengo_dl.configure_settings(stateful=False)
```
Is this overkill to ensure independence between the runs?