Challenges
Element AI turns cutting-edge AI research and industry expertise into software solutions that exponentially learn and improve. The company was looking into potential ways to combine simulation with AI and identified three challenges to address with the assistance of AnyLogic:
- Can simulation help generate valuable datasets to pre-train an AI model?
- Can AI help improve the behavior of agents in the simulation?
- Can reinforcement learning techniques be applied to real industry use cases?
It’s important to underline that deep learning usually requires very large datasets for model training to be successful. Rather than relying on extensive rule-based systems, neural networks, a family of models used in AI, learn from vast amounts of data, fitting their decisions to as many data points as possible. However, that data can:
- be insufficient or missing,
- be biased,
- be sensitive/private and thus require anonymization,
- take a long time to obtain access to (for client engagements),
- require cleaning,
- require pre-processing to match the AI model.
For all these reasons, when an AI team wants to engage with a client or build a core capability, it needs to figure out data requirements early in the project, either through a data audit phase or a data gathering and labelling phase. This step can take a long time and be quite costly. Therefore, it was the first challenge for Element AI to tackle with simulation.
As a second challenge, Element AI looked at how AI can help simulation make better decisions and discovered a few interesting applications:
- AI can act as a brain for the simulation agent so it behaves with better insights,
- Simulation can be a testing ground to compare AI models,
- Simulation can help build a visual demonstration of the added value of an AI solution,
- Simulation can help prepare AI for irregular operation scenarios at a much lower cost,
- Simulation can be used by domain and technical experts to reach a common understanding of a problem and de-risk a project early,
- Simulation can be combined with AI in a digital twin environment.
Finally, because of the untapped value identified in the first two challenges above, Element AI decided to defer industry testing of reinforcement learning techniques to a later date and case study.
Solution
In order to tackle the identified challenges, Element AI selected an industry use case that would benefit from the use of the simulation ideas highlighted above. The company focused on replicating the operations of a grocery store; more specifically, the focus was put on product demand forecasting and employee task prioritization for shelf replenishment.
The first objective was to generate five years' worth of minute-by-minute product demand data with significant variability, noise, and irregular events. The key was to create enough data for the time series forecasting (AI) algorithm to learn from, at a complexity level that warrants the use of AI over traditional rule-based formulas.
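As an illustration of the kind of signal involved, the Python sketch below composes daily, weekly, and yearly cycles with random promotion days and Poisson noise into a minute-by-minute demand series. It is a stand-in only: the formula, parameter values, and event types are assumptions for illustration, since the actual data came out of the AnyLogic simulation model.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

# Five years of minute-level timestamps.
index = pd.date_range("2015-01-01", periods=5 * 365 * 24 * 60, freq="min")

# Hypothetical seasonal components: time-of-day, day-of-week, and yearly cycles.
hour_of_day = index.hour.to_numpy() + index.minute.to_numpy() / 60
day_of_week = index.dayofweek.to_numpy()
day_of_year = index.dayofyear.to_numpy()

daily_cycle = 1 + 0.8 * np.sin((hour_of_day - 14) / 24 * 2 * np.pi)  # afternoon peak
weekly_cycle = 1 + 0.3 * (day_of_week >= 5)                          # weekend bump
yearly_cycle = 1 + 0.2 * np.sin(day_of_year / 365 * 2 * np.pi)       # seasonal drift

base_rate = 0.05 * daily_cycle * weekly_cycle * yearly_cycle         # expected units per minute

# Irregular events: a few promotion days that double demand for the whole day.
promo_days = rng.choice(index.normalize().unique(), size=60, replace=False)
promo_boost = np.where(index.normalize().isin(promo_days), 2.0, 1.0)

# Poisson noise around the composed rate gives integer minute-by-minute sales.
units_sold = rng.poisson(np.clip(base_rate * promo_boost, 0, None))

demand = pd.DataFrame({"units_sold": units_sold}, index=index)
print(demand.resample("h").sum().head())
```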
A known risk with this approach is that an AI model can overfit to the simulated data and fail to generalize when new parameters are inserted or when compared to real-life data. This is generally overcome or minimized with domain randomization.
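A minimal sketch of what domain randomization could mean here is shown below: instead of producing one dataset from fixed settings, each simulation run draws its parameters from ranges, so the forecasting model is trained across a family of plausible stores rather than a single configuration. The parameter names and ranges are hypothetical.

```python
import numpy as np

rng = np.random.default_rng()

def sample_scenario_params():
    """Draw one randomized store configuration (hypothetical parameter names)."""
    return {
        "base_demand_per_minute": rng.uniform(0.02, 0.10),
        "weekend_multiplier": rng.uniform(1.1, 1.6),
        "promo_days_per_year": int(rng.integers(5, 30)),
        "noise_scale": rng.uniform(0.5, 2.0),
    }

# Generate many randomized scenarios; each one would drive a separate simulation run,
# and the forecasting model would be trained on data pooled across all of them.
scenarios = [sample_scenario_params() for _ in range(20)]
for s in scenarios[:3]:
    print(s)
```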
The second objective was to use AI to guide employee task prioritization in the simulation: more specifically, to help the virtual employees (agents) responsible for shelf replenishment know which products to prioritize in order to avoid or minimize costly out-of-stock events.
To investigate these two objectives, the grocery store simulation model included three basic agent types, each with a set of randomly seeded parameters, to ensure the desired level of complexity (a sketch of this parameterization follows the list below).
1. Clients modelled as pedestrian agents with various:
- walking speeds
- product shopping lists
- arrival rates (per hour, day of week, month of year, year to year, null on national holidays)
- cart abandonment tolerances
2. Product categories with various:
- demand distributions
- availability
- restock thresholds
- time required to restock
- prices & margins
- physical locations in the store
- weekly promotions
3. Employees with various:
- roles
- availability schedules
- task priorities
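The sketch below illustrates how such randomly seeded agent parameters could be represented in code; the attribute names, ranges, and defaults are hypothetical stand-ins, since the real parameters live inside the AnyLogic model.

```python
import random
from dataclasses import dataclass, field

random.seed(7)  # fixed seed so the randomly drawn parameters are reproducible

@dataclass
class Client:
    walking_speed_mps: float = field(default_factory=lambda: random.uniform(0.8, 1.6))
    shopping_list_size: int = field(default_factory=lambda: random.randint(3, 25))
    cart_abandonment_tolerance_min: float = field(default_factory=lambda: random.uniform(5, 20))

@dataclass
class ProductCategory:
    name: str
    mean_hourly_demand: float = field(default_factory=lambda: random.uniform(2, 40))
    restock_threshold: int = field(default_factory=lambda: random.randint(5, 30))
    restock_time_min: float = field(default_factory=lambda: random.uniform(5, 30))
    price: float = field(default_factory=lambda: round(random.uniform(1, 20), 2))

@dataclass
class Employee:
    role: str = "shelf_replenishment"
    shift_start_hour: int = field(default_factory=lambda: random.choice([6, 8, 14]))
    shift_length_h: int = 8

# Populate a small store: many clients, a handful of product categories and employees.
products = [ProductCategory(name=f"category_{i}") for i in range(10)]
clients = [Client() for _ in range(100)]
staff = [Employee() for _ in range(5)]
print(products[0], clients[0], staff[0], sep="\n")
```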
Finally, Element AI synchronized their AI models with the simulation execution.
The developed solution involved four simple steps:
- run a few minutes/hours of the simulation
- pause the simulation to output the stock levels of each product to a text file
- raise a flag for the external AI module to:
  - process the information
  - return a list of prioritized products to restock (tasks) in a text file
- resume the simulation with the employees working based on the new priorities
The main benefit of this approach is that it is agnostic to the level of complexity of the AI and the coding language (Python in this case), at the cost of having to briefly pause the simulation at regular intervals.
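The sketch below shows what the Python side of this handshake might look like. The file names, text formats, and the placeholder prioritization rule are assumptions; the case study only states that stock levels are written out, a flag is raised, and a prioritized task list is returned before the simulation resumes.

```python
import json
import time
from pathlib import Path

STOCK_FILE = Path("stock_levels.txt")        # written by the simulation when it pauses
FLAG_FILE = Path("ai_request.flag")          # raised by the simulation to request new priorities
TASKS_FILE = Path("restock_priorities.txt")  # read by the simulation before it resumes

def prioritize(stock_levels: dict) -> list:
    """Placeholder policy: restock the lowest-stock products first.
    A trained forecasting model would replace this with demand-driven priorities."""
    return sorted(stock_levels, key=stock_levels.get)

while True:
    if FLAG_FILE.exists():
        stock = json.loads(STOCK_FILE.read_text())           # e.g. {"milk": 3, "bread": 12}
        TASKS_FILE.write_text("\n".join(prioritize(stock)))  # one product per line, highest priority first
        FLAG_FILE.unlink()                                    # signal the simulation to resume
    time.sleep(0.5)
```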
Outcome
The five years of data generated from the simulation allowed Element AI's scientists to train time series forecasting models for minute-by-minute product demand. This was done with a temporal split of the data: the first four years were used to train the AI, and the fifth year was used to test the accuracy of the forecast.
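A minimal sketch of that temporal split, using a synthetic stand-in for the simulated dataset, could look like this:

```python
import numpy as np
import pandas as pd

# Stand-in for the simulated dataset: five years of hourly demand (values are synthetic).
index = pd.date_range("2015-01-01", periods=5 * 365 * 24, freq="h")
df = pd.DataFrame(
    {"units_sold": np.random.default_rng(0).poisson(3.0, size=len(index))},
    index=index,
)

# First four years for training, fifth year held out for evaluation.
split_year = index.year.min() + 4
train = df[df.index.year < split_year]
test = df[df.index.year >= split_year]
print(train.index.max(), "->", test.index.min())
```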
The baseline for the hourly product demand forecast was set as Lag-0, which predicts that the previous hour will repeat itself. The accuracy of the other time series models was then evaluated against that baseline.
The results showed that a store manager trying to predict what will be sold in the next hour would be 61% accurate using the past hour as a reference, and up to 80% accurate using an AI demand forecasting tool.
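For reference, the Lag-0 baseline itself is trivial to express. The sketch below uses synthetic hourly data and mean absolute error purely for illustration; the exact accuracy metric behind the 61% and 80% figures is not restated here.

```python
import numpy as np
import pandas as pd

# Synthetic hourly demand stands in for the held-out test year.
rng = np.random.default_rng(1)
index = pd.date_range("2019-01-01", periods=24 * 30, freq="h")
hourly_demand = pd.Series(rng.poisson(20, size=len(index)), index=index)

# Lag-0 baseline: the forecast for each hour is simply the previous hour's observation.
lag0_forecast = hourly_demand.shift(1)
baseline_mae = (hourly_demand - lag0_forecast).abs().mean()

# Any candidate forecasting model would be scored the same way and compared to this value.
print(f"Lag-0 baseline MAE: {baseline_mae:.2f}")
```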
However, this conclusion comes with a caveat. Although there were multiple sources of variability and complexity introduced in the simulation model, the generated data still lacked realism.
For example, there were no irregular events that could force the store to close for a period of time, no employees who failed to show up for work, and there was always enough product in the backroom to refill the shelves; the demand forecasting problem was further simplified by not introducing any new products.
For these reasons, it could prove tricky to use the data generated by this simulation for data augmentation because the AI has not learned to handle additional noise and could be unable to adapt to real-world data. This is where domain randomization techniques and recent progress in sim-to-real or transfer learning would be beneficial.
However, even if results cannot be guaranteed once the model is exposed to real data, an AI trained with this simulation approach can help researchers rule out unsuitable forecasting models and evaluate whether a use case would benefit from additional sensors or data sources.
As identified in the second objective, the simulation model also allowed Element AI to compare the impact of various AI policies for task prioritization based on a set of metrics per defined time period (day, week, month, year):
- total revenue
- total profits
- total client wait time
- total out-of-stock events
- total revenue lost from out-of-stock events
- total abandoned carts & items
The queue policy means that products are restocked in the order that they run out. The other policies are then compared to that baseline to evaluate the optimal policy for profits but can also be used to optimize for other KPIs like employee time utilization. As it turns out, under the defined parameters, the prioritization of tasks has far less impact than the ability to forecast demand, but different parameters or datasets might lead to a different conclusion.
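The sketch below illustrates, with a deliberately trivial stand-in for the store, how alternative prioritization policies could be scored against the queue baseline on a shared demand stream. The policies, demand draws, and metrics are hypothetical placeholders; the real comparison ran inside the simulation model.

```python
import random

def simulate(policy, hours=24 * 7, seed=3):
    """Toy store loop: tracks only out-of-stock events and lost revenue per policy."""
    rng = random.Random(seed)                 # same demand stream for every policy
    stock = {f"p{i}": 30 for i in range(10)}
    price = {p: rng.uniform(2, 15) for p in stock}
    restock_queue, out_of_stock_events, lost_revenue = [], 0, 0.0
    for _ in range(hours):
        for p in stock:                       # hourly demand draw per product
            demand = rng.randint(0, 5)
            sold = min(stock[p], demand)
            stock[p] -= sold
            if demand > sold:
                out_of_stock_events += 1
                lost_revenue += (demand - sold) * price[p]
            if stock[p] == 0 and p not in restock_queue:
                restock_queue.append(p)
        if restock_queue:                     # one restock task completed per hour
            restock_queue.sort(key=lambda p: policy(p, stock, price))
            stock[restock_queue.pop(0)] = 30
    return {"out_of_stock_events": out_of_stock_events, "lost_revenue": round(lost_revenue, 2)}

def queue_policy(p, stock, price):
    return 0              # baseline: stable sort keeps run-out order (first out, first restocked)

def value_policy(p, stock, price):
    return -price[p]      # alternative: restock the highest-priced product first

print("queue:", simulate(queue_policy))
print("value:", simulate(value_policy))
```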
In the end, simulation was used to generate data to improve the forecasting ability of the AI and as a testbed for different AI agent policies. Once deployed in a retail store, this solution could help a store manager gain a better understanding of how many products are expected to sell per hour and where employees should focus their shelf replenishment efforts.
Overall, this project allowed Element AI to become familiar with the world of discrete event and agent-based simulation, to engage clients in a new way, and to generate data for internal teams. Most importantly, the use of simulation gave Element AI a building block to be leveraged when tackling more complex future projects that involve reinforcement learning, sim-to-real, transfer learning, and digital twinning.