AnyLogic Cloud 2.4.0 – optimization experiment at scale

Gregory Monakhov is an AnyLogic Cloud and AnyLogic 9 product owner. He previously worked in tech support for seven years, so he is uniquely placed to ensure that customers' needs are continuously met.

Our latest release is here and features many interesting developments. However, the main focus of this blog post is going to be the new experiment types: optimization and optimization with replications. We will break down, step-by-step, how you can upload your model to AnyLogic Cloud and run these new experiments.

Contents:

  1. What is an optimization experiment
  2. An idea becomes a reality
  3. Minimum viable feature for an optimization experiment
  4. How to perform an optimization experiment
  5. Stochastic models
  6. Moving forward

Optimization experiment

Since the very beginning of our AnyLogic Cloud product, we have been eager to support an optimization experiment. In 2017, when AnyLogic Cloud was first released, users could choose “Optimization experiment” in the New experiment menu, and then a “Coming soon” message would appear. At that time, we believed it would not take long to implement.

This experiment type is probably the most complex, since the model run results are passed into an optimizer, which then generates the next set of model input values based on the results of the runs already executed. Things become especially complicated due to the distributed nature of AnyLogic Cloud, where each model run is conducted in a separate secure space. As a result, the optimizer cannot be embedded into the model itself (as in AnyLogic desktop); it has to run as a separate service.

To implement this experiment, we conducted extensive research and development on many of the solutions available on the market. Unfortunately, most were rejected due to poor results in black-box optimization, insufficient performance, and other issues. Eventually, we put an end to our R&D and removed the “Optimization experiment” option from the New experiment menu.

An idea becomes a reality

Fast forward to 2023, and here we go again with completely new AnyLogic 8 and AnyLogic Cloud teams. AnyLogic 8 and Cloud share the same optimization module, which has to stay interoperable between the two products, but there are still plenty of exciting challenges for everyone.

Do we have enough existing mechanics, like input widgets and output charts, to implement the optimization experiment? Should we adjust the load balancing to get better performance? What interface should be used for interaction with optimization algorithms? What functionality should be available in the initial release of the feature?

The “optimization” challenge sparked many discussions and intense debates about what users wanted. In the end, we decided to follow the Pareto principle and focus on the 20% of development effort that provides 80% of the feature functionality. This way, we could deliver the feature to customers, collect feedback, and schedule further improvements based on it.

Optimization experiment minimum viable feature

What is this “80% of the feature functionality”? There is a how-to video about the optimization experiment in AnyLogic, which covers all the main optimization features: objectives, decision variables, requirements, and experiment results. In AnyLogic Cloud experiment framework terms, they are: model outputs, model inputs, and the experiment dashboard.

  1. Objective = model outputs
  2. Decision variables = model inputs
  3. Requirements = model outputs
  4. Experiment results = the best model input values plus the respective model outputs visualized on the experiment dashboard

The tricky part here is that the optimal input values are part of the experiment output. Nonetheless, this is the minimum set of features necessary for a viable optimization experiment. Of course, there are many other features, such as the optimization experiment API, constraints, live indication of optimization progress, and so on. But these are not core features, and we can release them separately from the core functionality.

How to perform an optimization experiment in AnyLogic Cloud

AnyLogic Cloud is a model execution platform, so first of all, you need to upload a configured model to the Cloud. If you work in AnyLogic 9, you may simply press the “Open experiments” button without any special preparation.

However, in the case of AnyLogic 8, you need to make sure your Objective and Requirements expressions are encapsulated in the Output element available in the Analysis palette. Then, you need to define the respective model inputs and outputs in the Run Configuration object. Let’s take a detailed look at this using an example model.

Preparing an AnyLogic 8 model

This Activity Based Cost Analysis example model is a simple manufacturing model designed for learning purposes. Here we optimize cost per product, which depends on the number of resource units, conveyor speed, and mean process delay time.

Here is how the optimization experiment is designed in AnyLogic 8:

There is an expression root.totalCostPerProduct() which defines the objective and the respective four parameters (decision variables).

The properties view with objectives and parameters highlighted

Optimization experiment in AnyLogic 8

In addition, there is a requirement expression root.seizeA.queue.capacity - root.seizeA.queue.size() in the Requirements section.

The requirements section with the requirement expression highlighted

Optimization experiment requirement

If the above requirement is enabled, then the Seize block queue should be full at the end of the simulation run.
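The arithmetic behind this requirement can be sketched in a few lines of Python (the function name and values are ours, purely for illustration):

```python
# Sketch of the requirement expression logic with hypothetical values.
# The requirement value is capacity - size, and the constraint
# "less than or equal to 0" holds only when the queue is full.

def queue_requirement(capacity: int, size: int) -> int:
    """Mirrors root.seizeA.queue.capacity - root.seizeA.queue.size()."""
    return capacity - size

# A full queue satisfies the requirement (value <= 0)...
assert queue_requirement(capacity=10, size=10) <= 0
# ...while a partially filled queue violates it.
assert queue_requirement(capacity=10, size=7) > 0
```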

Now, as you may know, when you export a model to AnyLogic Cloud, the AnyLogic 8 experiment and its code are not exported. AnyLogic Cloud uses its own experiment framework, which works on top of the model inputs and outputs. The inputs and outputs are defined in the Run Configuration object available in each model. This is how that object appears in the Activity Based Cost Analysis example model.

AnyLogic 8 Run Configuration editor illustrated

Run Configuration editor (click to enlarge)

Let’s see what we need to put in the Run Configuration editor to create an optimization experiment in the Cloud. First of all, there is the objective expression, and it should be encapsulated in the Output element.

The objective expression encapsulated in the Output element in the properties view

Output element with objective function

Model parameters always appear in the Run Configuration, so we don’t need to create new elements here.

The last thing is the queue size requirement, so we need to drag and drop the Output element (see the Analysis Palette) onto the Main canvas and specify the respective expression there.

The requirement expression encapsulated in the Output element in the properties view

Output element with requirement expression

So, once all the elements have been specified in the Run Configuration, they may be dragged into the relevant inputs and outputs sections.

The Run Configuration editor with completed inputs and outputs displayed for the experiment

Completed Run Configuration object (click to enlarge)

At this point, the model is ready to be exported to AnyLogic Cloud. You can export the model via the respective hyperlink in the Run Configuration properties. Once the model is exported, AnyLogic automatically opens the model's web page in AnyLogic Cloud. After clicking on the simulation experiment in the experiment sidebar, you will see the following screen. As you can see, the Inputs section contains all four decision variables.

The simulation dashboard in AnyLogic Cloud showing the inputs

Simulation experiment dashboard in AnyLogic Cloud of a newly exported model

Let’s check if the Outputs section contains the objective and the requirement. Click the gear icon and check the list of outputs in the single value widget. Here is what it should look like.


A gif illustrating how to select single value outputs
Available single value outputs in AnyLogic Cloud

Since all required inputs and outputs are available, we are ready to design the optimization experiment in AnyLogic Cloud.

Designing an optimization experiment in AnyLogic Cloud

As with any experiment, an optimization experiment can be created with the New experiment button in the experiment sidebar. In the dialog window, let’s specify “Optimize total cost” as the experiment name and Optimization as the experiment type.

New experiment dialog window with the experiment type selected

New experiment dialog window

After the experiment has been created, click on the gear icon near the experiment name and configure the dashboard widgets for the Inputs, the Requirements, and the Outputs.

Inputs section settings

  1. Manufacturing parameters:
    • Resource A capacity – Discrete range
    • Resource B capacity – Discrete range
    • Mean processing time – Continuous range
    • Conveyor speed – Continuous range
  2. Requirements:
    • Queue size requirement – click the eye icon to make it visible

Inputs dashboard configuration with inputs and requirements highlighted

Inputs dashboard configuration (click to enlarge)

Outputs section settings

Here, there are two single-value widgets with the objective and the requirement outputs. In addition to these widgets, let’s create a bar chart to visualize the optimal input values. To do this, click the “Add output” button, and then, in the widget that appears, select the Bar chart type.

Then, using the “Select input” drop-down list, select the Resource A capacity, Resource B capacity, Mean processing time, and Conveyor speed inputs one by one. Finally, name the widget “Optimal inputs” and adjust its size. Here is how it may look.

Outputs dashboard configuration illustrated using a bar chart

Outputs dashboard configuration (click to enlarge)

Once the Inputs and Outputs of the experiment dashboard are configured, let’s save the experiment and proceed to the final stage of the experiment configuration – defining decision variables and other input values.

For the resource capacity inputs, let’s specify a range from 1 to 20 with a step of 1. The Mean processing time varies from 1 to 12, and the Conveyor speed varies from 5 to 15. The other experiment settings are as follows.

  1. Experiment settings:
    • Objective: minimize output total cost per product
    • Number of iterations: 500
  2. Requirements:
    • Queue size requirement is less than or equal to 0

Here is how it will look.

Experiment dashboard with defined input values

Experiment dashboard with defined input values (click to enlarge)
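Put together, the experiment configuration above can be summarized as a plain data structure. This is a sketch only: the dictionary keys are ours, not AnyLogic Cloud API identifiers.

```python
# Sketch of the optimization experiment configuration described above.
# These dictionary keys are illustrative, not AnyLogic Cloud API names.
experiment = {
    "objective": {"output": "Total cost per product", "direction": "minimize"},
    "iterations": 500,
    "decision_variables": {
        "Resource A capacity": {"type": "discrete", "min": 1, "max": 20, "step": 1},
        "Resource B capacity": {"type": "discrete", "min": 1, "max": 20, "step": 1},
        "Mean processing time": {"type": "continuous", "min": 1, "max": 12},
        "Conveyor speed": {"type": "continuous", "min": 5, "max": 15},
    },
    "requirements": {"Queue size requirement": {"op": "<=", "value": 0}},
}

# Each discrete range contributes (max - min) / step + 1 candidate values:
a = experiment["decision_variables"]["Resource A capacity"]
discrete_values = (a["max"] - a["min"]) // a["step"] + 1
print(discrete_values)  # 20 candidate capacities per resource
```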

Now we are ready to run the experiment.

Running an optimization experiment in AnyLogic Cloud

To run the experiment, just press the Run button in the Cloud toolbar and wait for the experiment to be completed. Under the hood, the Cloud interacts with the optimizer in the following way.

  1. The optimizer tells the Cloud the set of input values to be executed.
  2. The Cloud checks whether the runs have already been executed and whether the respective results are available in the database of run results. If so, the results are sent to the optimizer without re-running the model; otherwise, the Cloud executes the respective set of model runs.
  3. The optimizer processes the outputs and gives the Cloud another set of input values.
  4. The process repeats until the iteration threshold is reached or the optimizer has no new set of input values to execute.
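The ask-run-tell loop above can be sketched as follows. This is a toy illustration only: a random-search “optimizer” stands in for the real one, an in-memory dictionary stands in for the Cloud's run database, and none of the names come from the Cloud API.

```python
import random

def model_run(inputs):
    """Stand-in for a Cloud model run: any deterministic function of the inputs."""
    return (inputs["capacity"] - 7) ** 2  # toy objective to minimize

cache = {}          # stands in for the database of already executed runs
executed_runs = 0   # counts actual (non-cached) model executions

def run_or_reuse(inputs):
    """Step 2: reuse stored results instead of re-running the model."""
    global executed_runs
    key = tuple(sorted(inputs.items()))
    if key not in cache:
        executed_runs += 1
        cache[key] = model_run(inputs)
    return cache[key]

rng = random.Random(42)
best_inputs, best_value = None, float("inf")
for iteration in range(100):                    # step 4: iterate up to the threshold
    inputs = {"capacity": rng.randint(1, 20)}   # step 1: optimizer proposes inputs
    value = run_or_reuse(inputs)                # step 2: run or fetch from the cache
    if value < best_value:                      # step 3: optimizer processes outputs
        best_inputs, best_value = inputs, value

print(best_inputs, best_value, executed_runs)
```

Note how the cache keeps the number of actual model executions far below the number of iterations: with only 20 distinct capacity values, at most 20 of the 100 iterations trigger a real run.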

Once the experiment is completed, you will see the outputs section filled with the experiment results.

optimization experiment results including a bar chart

Optimization experiment results (click to enlarge)

As with other experiments, the Cloud stores the optimization experiment results. This means that if another user of the model creates (or configures) the same optimization experiment dashboard, the experiment results will be shown in the dashboard instantly.

However, it is important to keep in mind that, due to the distributed nature of cloud computing, the optimizer does not guarantee reproducible run results. You may therefore get different results even for the same set of input values, especially if the number of iterations is relatively small.

Processing the optimization experiment results in AnyLogic Cloud

Once the experiment has been completed, you will be able to:

  1. Play the animation with optimal input values.
  2. Download the optimization experiment results to an Excel file.
  3. Compare the experiment results with those of other experiments.

Let’s take a detailed look at these features.

When you play the model animation, you may pause the model execution as well as change things inside the model on the fly. This is crucial if you want to examine the solution proposed by the optimizer in depth and figure out whether it truly works, or whether you need to reconsider the optimization task. Here is how the model animation plays for the optimal solution found in our Activity Based Cost Analysis example model.


A gif showing how sliders can be changed to adjust the model parameters
Optimal solution is played with animation. Sliders can be used to adjust the model parameters on the fly

An Excel file with results is useful if you want to export the results from the Cloud and post-process them with Excel tools. The Excel file contains information about all model inputs and outputs according to the experiment dashboard configuration.

The optimal input values are available in the outputs sheet as well as in the respective output widget sheets. Here is the Excel file with the results of the optimization experiment we have executed.

An Excel spreadsheet displaying results from the experiment

Excel file with results

The comparison of experiment results feature allows you to visualize the results of different experiments on the same screen. It is mostly used for evaluating different solutions and scenarios. For the optimization experiment, the comparison feature is helpful for:

  1. Analyzing different optimization scenarios, e.g., comparing the solutions received with different sets of requirements.
  2. Analyzing other scenarios, e.g., comparing the results of a simple simulation experiment against the optimization experiment results.

To do this, let’s create a copy of the optimization experiment by clicking the Duplicate icon in the experiment dashboard. In the new experiment, set the Queue size requirement to “Not restricted” and run the experiment. Once the results are available, press the Compare button in the toolbar and select both optimization experiments in the sidebar.

Comparison of two optimization experiments

Comparison of two optimization experiments (click to enlarge)

Optimizing stochastic models

Stochastic models are models that produce different results each time you run them with the same set of inputs, because each run uses a different random seed value. In this case, you need to execute runs with the same set of inputs several times to figure out the output variation.

For this purpose, you may use the optimization with replications experiment. Here, a “replication” is one run of the same set of inputs, and you specify how many replications to perform per iteration. The optimizer considers the mean output value across all replications, so the best solution is the iteration (set of replications) with the best mean value.
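The way the optimizer scores an iteration can be sketched like this. A toy stochastic “model” is seeded per replication; all names and values are illustrative, not part of the product.

```python
import random

def stochastic_model(inputs, seed):
    """Stand-in for one replication: same inputs, different random seed."""
    rng = random.Random(seed)
    return inputs["capacity"] * 2 + rng.uniform(-1.0, 1.0)  # toy noisy objective

def iteration_score(inputs, replications=10):
    """The optimizer scores an iteration by the mean output over its replications."""
    outputs = [stochastic_model(inputs, seed) for seed in range(replications)]
    return sum(outputs) / len(outputs)

# Two candidate iterations: the one with the better (lower) mean wins,
# even though individual replications are noisy.
score_a = iteration_score({"capacity": 3})
score_b = iteration_score({"capacity": 4})
best = "A" if score_a < score_b else "B"
print(round(score_a, 2), round(score_b, 2), best)
```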

The experiment can be configured in a similar way to the regular optimization experiment. You will need to specify the number of replications per iteration. Since the result of the experiment is a set of runs, the set of output widgets available in the experiment differs from the regular optimization experiment.

To visualize the result, you can use a Scatter plot, Box plot, Density plot, Mean and error bar chart, Histogram, or Histogram 2D. The most convenient way to work with the experiment results is a Scatter plot or Box plot of the objective value. The Scatter plot shows the objective values of all replications of the best iteration. If you hover over a data point, you will see the optimal input values.

A scatter plot illustrating results

Scatter plot in optimization with replications experiment

The Box plot aggregates the objective values and shows the min, max, mean, median, and Q1/Q3 quartiles. Since the optimizer chooses the best solution according to the mean value, it is helpful to check the mean value with the Box plot.

A box plot illustrating results

Box plot in optimization with replications experiment
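The statistics a Box plot aggregates can be reproduced from raw replication outputs with the Python standard library. This is just a sketch; the sample values below are made up.

```python
import statistics

# Hypothetical objective values from the replications of one iteration.
replication_outputs = [10.2, 11.5, 9.8, 10.9, 12.1, 10.4, 11.0, 9.5]

stats = {
    "min": min(replication_outputs),
    "max": max(replication_outputs),
    "mean": statistics.mean(replication_outputs),
    "median": statistics.median(replication_outputs),
    # statistics.quantiles with n=4 returns the Q1, median, and Q3 cut points
    "q1": statistics.quantiles(replication_outputs, n=4)[0],
    "q3": statistics.quantiles(replication_outputs, n=4)[2],
}

print(stats["min"], stats["mean"], stats["max"])
```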

These are all the notable differences between optimization and optimization with replications experiments. Of course, all the instruments mentioned in the previous section are available for the latter experiment type as well. You can play the best solution with animation (the initial random seed value is used in this case), download the Excel file with experiment results, and compare the experiment results with other experiments.

Moving forward

While the optimization experiment has been our focus in this blog, there are some other improvements in this latest release that you can find in the Release Notes. So, take a look and see what else we have added. We will be happy to receive your feedback about the optimization experiment, any of our other new features, and any ideas that you may have to improve the product.

We believe in evolving and staying ahead of the curve, so be sure to keep coming back to discover the latest exciting updates as we continue to add new innovative features to AnyLogic Cloud. And don't forget to subscribe to our monthly newsletter to stay up-to-date with new releases, blog posts, events, and more.
