Pathmind Reinforcement Learning Experiment in AnyLogic 8.7.4

A new experiment in AnyLogic 8.7.4 links to Pathmind’s reinforcement learning (RL) platform, helping simulation modelers and artificial intelligence practitioners leverage the synergy between simulation and AI. For large-scale and complex systems, solutions built on the Pathmind RL platform are outperforming well-established heuristics.

This blog post introduces the integrated AnyLogic Pathmind experiment and highlights two industrial case studies from Engineering Group that use the Pathmind RL platform.

Pathmind Reinforcement Learning Experiment

With the release of AnyLogic version 8.7.4, simulation modelers now have integrated access to the Pathmind RL SaaS platform from within AnyLogic. The Pathmind RL platform abstracts the complexity of applying reinforcement learning and lets modelers concentrate on their simulations and results.

To get started with the experiment, make sure you have AnyLogic 8.7.4 – either run the update from within the application or download it – and follow the RL experiment guide in AnyLogic Help. The experiment joins the existing Microsoft Bonsai experiment integration. You can learn more about AnyLogic and simulation on our dedicated AI and Simulation page.

At the INFORMS Business Analytics 2021 event, Luigi Manca, Director of Simulation and Digital Twin Practice at Engineering Group, detailed two industrial examples in his presentation Deep Reinforcement Training and Machine Learning Applications for Industry 4.0. Here are summaries of the examples with demonstration cloud models and links to related materials.

Optimal Operation and Maintenance

For a twenty-turbine wind farm with three service crews, a maintenance policy developed with reinforcement learning performed more than 30% better than established industry heuristics.

Pathmind reinforcement learning versus traditional wind farm servicing heuristics

Traditional approaches to scheduling maintenance depend on fixed thresholds for variables such as the remaining useful life (RUL) of equipment and fail to capture operational and environmental factors. They also schedule service crew tasks with policies such as first-in-first-out (FIFO), which leads to suboptimal routes between tasks.
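To make the baseline concrete, here is a minimal sketch (in Python, rather than AnyLogic's Java) of a fixed-threshold, FIFO dispatch heuristic of the kind described above. The turbine IDs, the 500-hour RUL cutoff, and the crew count are illustrative assumptions, not details of the Engineering Group model.

```python
from collections import deque

# Illustrative fixed cutoff: service anything whose estimated RUL falls below it.
RUL_THRESHOLD_HOURS = 500

def fifo_dispatch(turbine_ruls, num_crews):
    """Queue turbines whose RUL is below the threshold, in the order they
    crossed it, and assign the first `num_crews` of them to service crews.
    `turbine_ruls` is a list of (turbine_id, rul_hours) pairs."""
    queue = deque(
        name for name, rul in turbine_ruls if rul < RUL_THRESHOLD_HOURS
    )
    assignments = []
    while queue and len(assignments) < num_crews:
        assignments.append(queue.popleft())  # strict first-in-first-out
    return assignments

# Turbines listed in the order their RUL estimates crossed the threshold.
readings = [("T07", 120), ("T13", 900), ("T02", 340), ("T19", 480)]
print(fifo_dispatch(readings, num_crews=2))  # → ['T07', 'T02']
```

Note that the heuristic ignores where the turbines are, how severe each degradation is, and what the wind forecast says – exactly the information an RL policy can exploit.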

Engineering’s Pathmind RL policy produces dynamic results that account for a wide range of state inputs and allow for greater optimization.

The Pathmind RL policy considered:

  • the availability of maintenance crews,
  • variability in energy demand and wind turbine energy production,
  • the random nature of system element degradation and failure,
  • uncertainty from Prognostics & Health Management (PHM) algorithms,
  • the long time horizons associated with energy system operations.
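As a rough illustration of how state inputs like those above feed a policy, the sketch below flattens them into a normalized observation vector. The function name, field ordering, and scaling constants are assumptions for illustration only – they are not Pathmind's actual API or the model's real encoding.

```python
def build_observation(crews_available, energy_demand_mw, production_mw,
                      turbine_ruls, phm_confidence):
    """Normalize each signal to roughly [0, 1] and concatenate into one
    flat observation vector for the policy."""
    obs = [
        crews_available / 3.0,     # three service crews in this example
        energy_demand_mw / 100.0,  # assumed demand ceiling, MW
        production_mw / 100.0,     # assumed production ceiling, MW
        phm_confidence,            # PHM algorithm confidence, already in [0, 1]
    ]
    # One RUL entry per turbine, capped and scaled to [0, 1].
    obs += [min(rul, 10_000) / 10_000 for rul in turbine_ruls]
    return obs

obs = build_observation(2, 60.0, 45.0, [120, 900, 340], 0.8)
print(len(obs))  # 4 scalar signals + 3 turbine RULs = 7
```

In practice the simulation model supplies these values at each decision point, and stochastic elements such as degradation and failure enter through the simulated dynamics rather than the observation itself.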

The more data that is available, the better the policies reinforcement learning can determine. The method acts on large quantities of information and produces results based on interdependencies that traditional methods can miss and that may seem counterintuitive. The benefit of a simulation model is that these results can be visualized and verified.

You can try the Pathmind RL policy against other heuristics in the cloud model below.

To learn more about the development of the example, see the presentation video at the end of this blog, read the related paper, or read Pathmind’s take in their customer success blog.

Manufacturing Optimization

Reinforcement learning optimized the movement of heavy machinery and reduced bottlenecks on a complex, highly automated production line for customized orders. Engineering’s application of Pathmind RL in the factory increased the number of coordinated objects by 66% and reduced the number of movements by 11%.

Operations in the factory were captured in an AnyLogic simulation model using a hybrid agent-based and discrete-event approach and the Process Modeling Library. The model provided a valid simulation environment for reinforcement learning on the Pathmind RL platform.

The model and RL platform together give the factory operators a flexible platform for managing production line scheduling – production is quicker and more adaptable to change than systems based on manual decision making or traditional heuristics.

An example of machine movements can be seen in the online model.

The project followed earlier work applying AI to resolve bottlenecks at the factory. You can read more about the problem and the development of the RL model in An industrial problem resolved by AI and simulation.

Deep Reinforcement Training and Machine Learning Applications for Industry 4.0

Luigi Manca, Director of Simulation and Digital Twin Practice at Engineering Group, presents the two cases.

The AnyLogic team is always working to improve simulation modeling. Subscribe to our newsletter and stay up to date with developments and events.