Introduction
Modern cities rely on resilient infrastructure to withstand disruptions and ensure the continued operation of critical services such as water distribution and transportation. However, the interdependencies between these systems make them vulnerable to cascading failures, particularly under extreme weather events or budgetary constraints. Traditional optimization approaches often fail to capture the complex, dynamic interactions that influence infrastructure recovery.
This study presents a multimethod simulation approach that integrates system dynamics (SD) and agent-based modeling (ABM) to capture financial resource management and network resilience dynamics. Deep reinforcement learning (DRL) is then applied to optimize restoration scheduling so that infrastructure systems recover efficiently. The approach is tested on Tampa, Florida's water and mobility networks, demonstrating its effectiveness in improving infrastructure resilience.
Simulation model
The proposed framework integrates two modeling techniques and a deep learning algorithm to enhance resilient infrastructure planning:
1. System dynamics (SD) model
The SD model captures the financial resource allocation between the water and mobility departments, influencing maintenance activities. It models budget inflows (e.g., federal funding) and outflows (e.g., repair costs), ensuring realistic constraints on daily restoration efforts.
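To make the stock-and-flow logic concrete, the sketch below tracks water and mobility budget stocks under daily inflows and repair outflows. It is a minimal Python illustration, not the AnyLogic SD model itself; the inflow rate, repair cost, and budget split are placeholder values chosen only to show the mechanism.

```python
# Minimal stock-and-flow sketch of the departmental budget dynamics.
# All parameter values and names below are illustrative assumptions.

DT = 1.0  # time step in days

def simulate_budget(days=90, inflow_per_day=50_000,
                    repair_cost=20_000, water_share=0.23):
    """Track water and mobility budget stocks under daily inflows/outflows."""
    water_budget, mobility_budget = 0.0, 0.0
    history = []
    for day in range(days):
        # Inflow (e.g., federal funding) split between the two departments
        water_budget += water_share * inflow_per_day * DT
        mobility_budget += (1 - water_share) * inflow_per_day * DT

        # Outflow: each department funds at most one repair per day,
        # and only if its stock covers the cost - a constraint on restoration pace.
        if water_budget >= repair_cost:
            water_budget -= repair_cost
        if mobility_budget >= repair_cost:
            mobility_budget -= repair_cost

        history.append((day, water_budget, mobility_budget))
    return history
```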
2. Agent-based model (ABM)
The ABM simulates physical network components (pipes and roads) as agents that fail randomly or through interdependencies. Maintenance crews travel to failed components along routes computed with Dijkstra's shortest path algorithm, while traffic agents represent mobility constraints. The ABM was built in AnyLogic, leveraging its capability to integrate SD and ABM for resilient infrastructure planning.
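The routing idea can be illustrated outside AnyLogic with a small Python example using networkx: a crew at a depot is dispatched to a failed pipe along the quickest road path. The node names, travel times, and the networkx-based implementation are assumptions for illustration, not the case-study model.

```python
# Illustrative crew-routing sketch over a weighted road graph
# (node IDs and travel times are made up for this example).
import networkx as nx

roads = nx.Graph()
roads.add_weighted_edges_from([
    ("depot", "A", 5), ("A", "B", 7), ("B", "failed_pipe_12", 4),
    ("depot", "C", 6), ("C", "failed_pipe_12", 12),
])  # edge weight = travel time in minutes

# Dijkstra's algorithm gives the crew the quickest route to the failure site
route = nx.dijkstra_path(roads, "depot", "failed_pipe_12", weight="weight")
travel_time = nx.dijkstra_path_length(roads, "depot", "failed_pipe_12",
                                      weight="weight")
print(route, travel_time)  # ['depot', 'A', 'B', 'failed_pipe_12'] 16
```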
3. Multimethod simulation using deep reinforcement learning (DRL)
The DRL component optimizes the sequence of repairs by training a Deep Q-Network (DQN) to maximize resilience. The Markov decision process (MDP) framework defines states (the set of failed components), actions (repair scheduling decisions), and rewards (improvements in network resilience). The model learns optimal scheduling strategies through iterative training.
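A minimal sketch of this setup is shown below: the state is a binary failure vector, each action repairs one component, and a small Q-network is updated with a one-step temporal-difference target. The toy environment, reward shaping, network size, and PyTorch usage are assumptions made for illustration; in the study the agent is trained against the AnyLogic simulation rather than this stand-in.

```python
# Sketch of the repair-scheduling MDP with a DQN-style update
# (component count, rewards, and architecture are illustrative assumptions).
import random
import torch
import torch.nn as nn

N_COMPONENTS = 20  # pipes + road segments (assumed size)

class QNet(nn.Module):
    """Maps a binary failure-state vector to a Q-value per repair action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_COMPONENTS, 64), nn.ReLU(),
            nn.Linear(64, N_COMPONENTS))
    def forward(self, state):
        return self.net(state)

def step(state, action):
    """Toy environment stand-in: repairing a failed component restores it
    and yields a reward proportional to the service it recovers."""
    next_state = state.clone()
    reward = float(next_state[action])  # 1 if the component was failed
    next_state[action] = 0.0
    return next_state, reward

q_net = QNet()
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma, epsilon = 0.99, 0.1

state = torch.ones(N_COMPONENTS)  # all components start failed
for _ in range(N_COMPONENTS):
    # epsilon-greedy selection over possible repair actions
    if random.random() < epsilon:
        action = random.randrange(N_COMPONENTS)
    else:
        action = int(q_net(state).argmax())
    next_state, reward = step(state, action)

    # one-step temporal-difference (Q-learning) target
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    state = next_state
```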
Results
The multimethod simulation model was tested under various failure rates (5% and 10%) and financial constraints to evaluate its impact on infrastructure resilience. Simulation results showed:
- Financial allocation: a $5M budget with a 23:77 water-to-mobility ratio maximized overall resilience.
- Restoration scheduling: the DRL-based strategy restored infrastructure six days faster than a FIFO (first in, first out) approach.
- Resilient infrastructure performance: under FIFO, full restoration took 36 days, while DRL-optimized scheduling completed recovery in 30 days by prioritizing repairs to critical infrastructure first (a rough comparison of the two schedules is sketched below).
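As a rough illustration of what the six-day difference means in resilience terms, the sketch below compares lost service-days under the two schedules, assuming (purely for illustration) a linear recovery of network functionality; the study's actual resilience metric comes from the simulation, not this simplification.

```python
# Back-of-the-envelope comparison of the two schedules, assuming a linear
# recovery of functionality from 0% to 100% (an illustrative simplification).

def resilience_loss(restoration_days, horizon=36):
    """Area above the functionality curve: lost service-days over the horizon."""
    loss = 0.0
    for day in range(horizon):
        functionality = min(day / restoration_days, 1.0)
        loss += 1.0 - functionality
    return loss

fifo_loss = resilience_loss(36)  # FIFO: full restoration in 36 days
drl_loss = resilience_loss(30)   # DRL:  full restoration in 30 days
print(fifo_loss, drl_loss)       # the DRL schedule accumulates fewer lost service-days
```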
This study highlights the effectiveness of multimethod simulation and reinforcement learning in resilient infrastructure planning. The integration of AnyLogic’s simulation modeling with DRL optimization enables cities to develop strategic, data-driven restoration plans that reduce downtime and enhance infrastructure resilience.