Streamlining the connection to trained ML models with the ONNX Helper Library


There are many cases where it’s desirable to incorporate trained machine learning (ML) models into your simulation model. The following are some concrete examples, based on our ML Testbed cases:

  • Replacing static or distribution-based travel times with an ML model, trained on real-world data, that uses date and time inputs to predict travel duration.
  • Incorporating the ML model a real-world refurbishing facility already uses to classify the repairability of arriving components into a simulation model of that facility, for increased accuracy.
  • Showing, visually and statistically, the impact and overall performance of an ML model trained to control machine speeds (e.g., using reinforcement learning) before it is deployed in the real world.

In these types of cases, input data is retrieved and preprocessed before being used to train an ML model with one of the many available ML libraries (e.g., TensorFlow, caret, DL4J). After a desired policy is found, it can be exported to a portable file format and later called on to provide predictions (e.g., by edge devices or within simulation models).

One of these file formats, with the “.onnx” extension, comes from ONNX, the Open Neural Network Exchange. Its purpose is to provide an open ecosystem that keeps ML models from being locked into one specific framework. ML models in the ONNX format can be imported and queried from many different frameworks, with both cross-platform and cross-language support.

[Figure: AnyLogic ONNX workflow — data feeds a trained ML model, which is passed via ONNX into a simulation model and its experiments]

Previously, AnyLogic users could only access ONNX-compatible libraries either directly through Java (requiring programmer-level knowledge) or via Python, using the Pypeline add-on (at the cost of computational overhead).
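For context, here is a minimal sketch of what the direct Java route looks like with the ONNX Runtime API. The model file name, the input name “input”, and the two input features are placeholders for illustration only:

```java
import ai.onnxruntime.OnnxTensor;
import ai.onnxruntime.OrtEnvironment;
import ai.onnxruntime.OrtException;
import ai.onnxruntime.OrtSession;
import java.util.Collections;

public class TravelTimePredictor {
    public static void main(String[] args) throws OrtException {
        OrtEnvironment env = OrtEnvironment.getEnvironment();
        // "travel_time.onnx" and the feature layout (day of week, hour of day)
        // are hypothetical -- substitute your own trained model and inputs.
        try (OrtSession session = env.createSession("travel_time.onnx",
                new OrtSession.SessionOptions())) {
            float[][] features = {{2f, 14.5f}};  // one row of input features
            try (OnnxTensor tensor = OnnxTensor.createTensor(env, features);
                 OrtSession.Result result =
                         session.run(Collections.singletonMap("input", tensor))) {
                float[][] prediction = (float[][]) result.get(0).getValue();
                System.out.println("Predicted travel time: " + prediction[0][0]);
            }
        }
    }
}
```

With the ONNX Helper library, this kind of boilerplate is reduced to a single function call, as described below.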

Now, thanks to AnyLogic’s focus on business users, a new library makes ONNX access easier and more efficient. The ONNX Helper library is an add-on built to help all users streamline their ML workflows.

By incorporating this add-on library into your AnyLogic environment, your models can access its functionalities, just like with any other built-in library. It’s simply a matter of adding the helper object to your model and configuring it to reference an ONNX file. Then, from anywhere in your AnyLogic model, you can call a single “predict” function to query outputs.
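As a rough sketch of that “predict” call (the helper object’s name, onnxHelper, and the argument and return types are assumptions for illustration; see the project page for the actual signature), querying a travel-time model from within a model might look like:

```java
// Inside an AnyLogic model, e.g., in a delay block's duration expression or an event action.
// The feature list below is hypothetical -- use whatever inputs your ONNX model was trained on.
double[] inputs = { getDayOfWeek(), getHourOfDay() };  // date/time features
double[] outputs = onnxHelper.predict(inputs);         // single call to query the ONNX model
double predictedTravelTimeMinutes = outputs[0];
```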

Note that this add-on library is not used for training ML models, but for querying already trained ones. Given that AnyLogic is optimized for general-purpose simulation, and that properly training an ML model can be complex and time-consuming, training is best performed in a dedicated, specialized environment.

Further information about the library — including installation, usage, and examples — can be found on the AnyLogic ONNX project page.
