Train AI agents with reinforcement learning

An integral part of any reinforcement learning setup is providing RL agents with a reliable simulated environment. This is best accomplished by using a powerful, general-purpose simulation software with fast, consistent, and streamlined connections to RL algorithms. For experts or researchers who want to use AnyLogic models as their training environment for reinforcement learning, there are two available options: AnyLogic Cloud’s API and the ALPyne library.

Use Cases

Case 1: Optimal control of complex dynamic systems

Reinforcement learning can discover control policies for complex dynamic systems directly from a simulated environment. The policies learned during training can eventually be deployed to the real system from which the simulation model was built.


Case 2: Verifying and validating the simulation model

At its core, the reinforcement learning training process relies on an artificial explorer that examines and scrutinizes every corner of a simulation environment. With an appropriate reward schema, this mechanism can partially automate some commonly repetitive aspects of the verification and validation process, allowing more thorough testing of the robustness and fidelity of the simulation model. Although this approach is still in its infancy, it has the potential to become an integral part of verification and validation for all types of models.


Case 3: Comparing the efficacy and performance of different RL algorithms

There are repositories of standardized RL environments that let researchers test and compare their algorithms on a level playing field. However, these widely used environments do not provide the variety and complexity that are commonplace in real simulated systems. A general-purpose simulation platform can provide sophisticated training environments that are easily customizable yet offer the varying levels of complexity unique to each industry and applied scenario.
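Standardized environment repositories converge on a small reset/step contract, and a simulation-backed environment only needs to expose that same contract to plug into existing RL code. The sketch below shows the shape of such a wrapper, with a toy inventory process standing in for a real simulation model (all names and dynamics are illustrative assumptions, not part of any AnyLogic API):

```python
class SimEnv:
    """Gym-style wrapper: a toy inventory process stands in for a simulation model."""

    def __init__(self, demand=3, capacity=20, horizon=10):
        self.demand, self.capacity, self.horizon = demand, capacity, horizon

    def reset(self):
        self.stock, self.t = self.capacity // 2, 0
        return self.stock                      # initial observation

    def step(self, action):
        # action = units to reorder; reward penalizes deviation from zero stock
        self.stock = min(self.capacity, self.stock + action) - self.demand
        reward = -abs(self.stock)              # ideal is ending each step near zero
        self.stock = max(self.stock, 0)        # negative stock means a stock-out
        self.t += 1
        done = self.t >= self.horizon
        return self.stock, reward, done        # observation, reward, episode-finished flag


env = SimEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = 3                                 # trivial fixed policy: reorder mean demand
    obs, reward, done = env.step(action)
    total += reward
```

Any algorithm written against this reset/step interface can then be pointed at a richer simulation without changes to the training code.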


Case 4: Serving as a comparison metric to assess the efficacy of human-designed policies

Analysts can choose, design, or curate all sorts of rule-based, algorithmic, or heuristic solutions. Having access to a baseline in the form of an RL policy is extremely valuable for assessing the efficacy of these manually shaped solutions, especially in scenarios where an absolute optimum is unattainable.
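As a sketch of this comparison, assume a toy simulated service system and two policies: a fixed staffing rule an analyst might write by hand, and a state-dependent rule standing in for a trained RL policy. Scoring both on identical episodes makes the gap explicit (everything here, from the dynamics to the cost terms, is an illustrative assumption, not tied to any AnyLogic API):

```python
import random

def run_episode(policy, seed, horizon=20):
    """Score one episode of a toy queueing sim: policy decides how many servers to open."""
    rng = random.Random(seed)
    queue, cost = 0, 0.0
    for _ in range(horizon):
        queue += rng.randint(0, 4)         # stochastic arrivals
        servers = policy(queue)            # policy maps observation -> action
        queue = max(0, queue - servers)
        cost += queue + 0.5 * servers      # waiting cost plus staffing cost
    return -cost                           # higher reward = lower total cost

heuristic = lambda q: 2                    # fixed staffing rule an analyst might propose
adaptive  = lambda q: min(q, 4)            # stand-in for a learned, state-dependent policy

def mean_reward(policy, episodes=50):
    # identical seeds for both policies -> a paired, like-for-like comparison
    return sum(run_episode(policy, seed) for seed in range(episodes)) / episodes

baseline, tuned = mean_reward(heuristic), mean_reward(adaptive)
```

The same pairing idea carries over to a real setup: run the human-designed rule and the RL policy against the same simulated scenarios and compare mean episode reward.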

Workflows and Tools

Two options are available for experts or researchers who want to use AnyLogic models as their RL training environment: AnyLogic Cloud's API and the ALPyne library.

AnyLogic Cloud and its API

Upload the simulation model to the AnyLogic Cloud and use the cloud API to communicate with user-assigned AI frameworks.

This option is for experts with manually defined RL training code who wish to train using simulation environments hosted on AnyLogic Cloud. Owners of AnyLogic Private Cloud have access to a Python API that takes care of running the models on a scalable, server-based platform. At the moment, this API only supports training episodes that do not require interactive communication, since it provides the reward (or feedback) only at the end of each episode.
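Because feedback arrives only at the end of each episode, training in this setting takes the form of episode-level (black-box) optimization: propose parameters, run a full simulation, score the result, repeat. The sketch below shows that loop with a local scoring function standing in for the blocking Cloud run; the function name, the toy reward surface, and the random-search strategy are all illustrative assumptions, not part of the Cloud API:

```python
import random

def run_cloud_episode(params):
    """Stand-in for one Cloud simulation run: takes policy parameters and
    returns the episode's final reward (in reality, a blocking API call)."""
    target = (4.0, -2.0)                   # pretend optimum, for illustration only
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def random_search(n_iters=200, seed=0):
    """Episode-level training loop: propose parameters, score a full episode,
    keep the best. Fits the non-interactive, reward-at-episode-end setting."""
    rng = random.Random(seed)
    best_params, best_reward = None, float("-inf")
    for _ in range(n_iters):
        candidate = [rng.uniform(-5, 5), rng.uniform(-5, 5)]
        reward = run_cloud_episode(candidate)
        if reward > best_reward:
            best_params, best_reward = candidate, reward
    return best_params, best_reward

params, reward = random_search()
```

Any episode-level strategy (evolutionary methods, Bayesian optimization, and so on) fits the same loop, since each Cloud run is just a parameters-in, reward-out evaluation.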

Connection with ALPyne

Connect exported AnyLogic models and communicate with AI frameworks in a local Python environment via ALPyne.

For those interested in testing how a manually curated RL setup works with an AnyLogic model on a local machine, ALPyne provides a way to do so. This Python-based package lets you communicate with an AnyLogic model exported from the RL Experiment.
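ALPyne's actual call signatures are best taken from its own documentation; the sketch below only illustrates the interactive reset/step pattern such a setup follows, with a plain-Python stub standing in for the exported model (all names and dynamics here are illustrative assumptions, not ALPyne's API):

```python
class SimStub:
    """Illustrative stand-in for an interactive simulation session; the real
    object would wrap the exported AnyLogic model."""

    def __init__(self):
        self.level = 8

    def reset(self):
        self.level = 8
        return {"level": self.level}       # observation as a dict of model outputs

    def step(self, action):
        self.level += action - 2           # toy environment dynamics
        reward = -abs(self.level - 10)     # keep the level near its setpoint of 10
        done = self.level <= 0 or self.level >= 20
        return {"level": self.level}, reward, done


sim = SimStub()
obs, done, total = sim.reset(), False, 0.0
for _ in range(5):                         # a short interactive episode
    action = 2 if obs["level"] <= 10 else 1    # simple reactive policy
    obs, reward, done = sim.step(action)
    total += reward
    if done:
        break
```

Unlike the Cloud workflow, this loop exchanges an observation, action, and reward at every step, which is what on-policy and off-policy RL algorithms expect during training.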

Learn more