Decision assistance agent in real-time simulation
Urban society relies heavily on critical infrastructure (CI) such as power and water systems. The continued prosperity and national security of society depend on the ability to understand, measure, and analyse the vulnerabilities and interdependencies of this system of infrastructures. Only then can emergency responders (ER) react quickly and effectively to any major disruption the system might face. In this paper, we propose a model for training a reinforcement learning (RL) agent to optimise resource usage following an infrastructure disruption. The novelty of our approach lies in the use of dynamic programming techniques to build an agent that learns from experience generated by a simulator. The agent's goal is to maximise an output, in our case the number of discharged patients (DP) from hospitals or on-site emergency units. We show that by exposing such an intelligent agent to a large sequence of simulated disaster scenarios, we can accumulate enough experience for the agent to make informed decisions.
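The training loop described above can be illustrated with a minimal tabular Q-learning sketch. The toy simulator below is an illustrative stand-in, not the i2Sim model from the paper: the state is the remaining units of a single resource (e.g. power), each action routes 0, 1, or 2 units to a hospital, and the reward is the number of patients discharged per step. All class names, dynamics, and hyperparameters are assumptions for illustration only.

```python
import random

class ToySimulator:
    """Hypothetical stand-in for a disaster-scenario simulator.

    State: remaining units of one resource. Actions: allocate 0, 1, or 2
    units to the hospital. Reward: patients discharged this step.
    """

    def __init__(self, budget=5):
        self.budget = budget

    def reset(self):
        self.remaining = self.budget
        return self.remaining

    def step(self, action):
        used = min(action, self.remaining)
        self.remaining -= used
        reward = 2 * used              # assume 2 patients discharged per unit
        done = self.remaining == 0
        return self.remaining, reward, done

def q_learning(sim, episodes=2000, alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    """Learn a (state, action) -> value table from simulated episodes."""
    rng = random.Random(seed)
    actions = [0, 1, 2]
    q = {}
    for _ in range(episodes):
        state = sim.reset()
        done = False
        while not done:
            # Epsilon-greedy: mostly exploit the current table, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q.get((state, a), 0.0))
            nxt, reward, done = sim.step(action)
            best_next = max(q.get((nxt, a), 0.0) for a in actions)
            old = q.get((state, action), 0.0)
            # Standard temporal-difference update toward reward + discounted future.
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

sim = ToySimulator(budget=5)
q = q_learning(sim)
# Greedy action at the full-budget state: with discounting, delivering
# resources sooner discharges patients sooner, so allocating 2 units wins.
best = max([0, 1, 2], key=lambda a: q.get((5, a), 0.0))
```

Because the discount factor penalises delay, the learned policy prefers the largest allocation at the full-budget state; this mirrors, in miniature, how repeated exposure to simulated scenarios shapes the agent's allocation decisions.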
Keywords: artificial intelligence, critical infrastructures, disaster response, i2Sim real-time simulator, reinforcement learning agents, responsive crisis management, resource allocation, agent-based modelling, decision support systems, DSS, simulation, agent-based systems, multi-agent systems, MAS, emergency response, emergency management, resource usage optimisation, infrastructure disruption, dynamic programming, intelligent agents