Modeler’s Tips: Unreproducible Run Results



We at the AnyLogic Support Team are often asked: what should I do if I cannot reproduce my model results? That is why I decided to share a list of the most common sources of randomness in a model.

So, if you run the model with a fixed seed for the random number generator but the results are not reproducible, check your model against the list below.

For all experiments:

  1. The model should not contain HashMap or HashSet collections, whose iteration order is not guaranteed. Use LinkedHashMap and LinkedHashSet instead.
  2. The object.hashCode() and System.identityHashCode() functions should not be called in the model logic: identity hash codes can differ between JVM runs.
  3. System.currentTimeMillis(), new Date(), and similar functions that return the current system time should not be used in the model logic.
  4. Results are reproducible only after a complete model restart (the model window is closed); multiple runs in a row (Start -> Stop -> Start -> ...) may differ. This means that each model run leaves “garbage” in user data. For instance, static variables are shared by all model runs: if one iteration changes a value, the next one starts with the modified value.
  5. If external data sources are used, make sure the input data does not change while the model is running; otherwise a new iteration will use the modified input data.
  6. Random number sources other than the model’s generator — a user-created Random instance, Math.random(), or Collections.shuffle() without an explicit Random argument — should not be used in the model. Use the getDefaultRandomNumberGenerator() function to access the model’s random number stream.
  7. If conversion from dates to model time is used, the model start/stop dates should be fixed.
  8. Custom parallel threads (if any) must be correctly synchronized so that their operations execute in a deterministic order.
  9. Dynamic properties of shapes should not contain any functions that change the model state: how often they are evaluated depends on presentation rendering, not on the model clock.
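Item 1 above can be demonstrated in plain Java (the class and key names are illustrative). A LinkedHashMap iterates in insertion order on every run, while a HashMap with identity-hashed keys can iterate in a different order each time the JVM starts:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class IterationOrderDemo {
    public static void main(String[] args) {
        // LinkedHashMap preserves insertion order, so iterating over it
        // is reproducible regardless of the keys' hash codes.
        Map<String, Integer> linked = new LinkedHashMap<>();
        linked.put("truck", 1);
        linked.put("agent", 2);
        linked.put("source", 3);
        System.out.println(linked.keySet()); // always [truck, agent, source]

        // HashMap orders entries by hash buckets; with identity-hashed keys
        // (plain objects) that order can differ between JVM runs, so any
        // model logic iterating over it becomes non-reproducible.
        Map<Object, Integer> hashed = new HashMap<>();
        for (int i = 0; i < 3; i++) hashed.put(new Object(), i);
        System.out.println(hashed.values()); // order not guaranteed
    }
}
```

If the model logic ever iterates over a collection (e.g. to pick the “first” agent), this ordering difference alone is enough to make two runs diverge.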
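Item 6 can be sketched in plain Java as well. Here a seeded java.util.Random stands in for the model’s stream; in an AnyLogic model you would pass getDefaultRandomNumberGenerator() instead of creating your own Random:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class SeededShuffleDemo {
    // Stand-in for the model's random number stream; in AnyLogic, pass
    // getDefaultRandomNumberGenerator() here rather than a new Random.
    static final Random MODEL_RNG = new Random(1);

    static List<Integer> shuffledList() {
        List<Integer> list = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        // Collections.shuffle(list) without a Random argument uses its own
        // internal generator, which ignores the model's fixed seed.
        // Passing the model's generator keeps the shuffle reproducible.
        Collections.shuffle(list, MODEL_RNG);
        return list;
    }

    public static void main(String[] args) {
        System.out.println(shuffledList());
    }
}
```

The same rule applies to Math.random(): it draws from a hidden global generator, so its output is independent of the seed you set in the experiment.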
For experiments with multiple iterations:

  1. The experiment should not contain static variables or fields that are changed from within iterations.
  2. The random number generator in a Custom Experiment should be reset or reinitialized before each new model run.
  3. An optimization experiment with parallel evaluations may give different results each time: a new set of parameter values is formed from previously completed solutions, and the number of completed solutions at any given moment varies with the execution speed of each iteration. Disable parallel evaluations to get the same result each time.
  4. If the results of an experiment with multiple iterations do not match the results of the Simulation experiment, check the model start/stop time, random seed, selection mode for simultaneous events, and other engine settings.
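The static-state pitfall (item 1 above, and item 4 of the first list) can be reproduced in a few lines of plain Java; the class and field names are illustrative:

```java
public class StaticStateDemo {
    // A static field is shared by every model run in the same JVM session:
    // iteration 2 starts from whatever value iteration 1 left behind.
    static int processedTotal = 0;

    // Simulates one model run that processes 10 entities.
    static int runIteration() {
        for (int i = 0; i < 10; i++) processedTotal++;
        return processedTotal;
    }

    public static void main(String[] args) {
        System.out.println(runIteration()); // 10
        System.out.println(runIteration()); // 20: the second run inherits
        // the first run's state, so its result differs from run one.
        // Fix: reset such fields explicitly at the start of every run,
        // e.g. processedTotal = 0; in code executed before each iteration.
    }
}
```

The fix is to avoid static mutable state altogether, or to reset every such field in code that runs before each iteration starts.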

Hope this helps in your work! Subscribe to our blog for more updates and useful tips!