Strategizing Equitable Transit Evacuations: A Data-driven Reinforcement Learning Approach

Abstract: 

As natural disasters become increasingly frequent, efficient and equitable evacuation planning has become more critical. This paper proposes a data-driven, reinforcement learning (RL)-based framework that optimizes public transit operations for bus-based evacuations in transportation networks, with an emphasis on improving both efficiency and equity. We model the evacuation problem as a Markov Decision Process (MDP) solved by RL, using real-time transit data from the General Transit Feed Specification (GTFS) and transportation networks extracted from OpenStreetMap (OSM). The RL agent dynamically reroutes buses from their scheduled locations to minimize total passenger travel time and vehicle routing costs while ensuring equitable transit service distribution across communities. Simulations on the San Francisco Bay Area transportation network indicate that the proposed framework achieves significant improvements in both evacuation efficiency and equitable service distribution compared to traditional rule-based and random strategies. These results highlight the potential of RL to enhance system performance and urban resilience during emergency evacuations, offering a scalable solution for real-world applications in intelligent transportation systems.
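To make the MDP framing in the abstract concrete, the sketch below casts bus rerouting on a toy evacuation network as a small Markov Decision Process solved with tabular Q-learning. The stop names, travel times, and reward values are all invented for illustration; this is a minimal sketch of the general technique, not the authors' implementation, which uses real GTFS and OSM data.

```python
import random

# Hypothetical toy network: nodes are transit stops, edge weights are
# travel times in minutes. All names and numbers are illustrative.
EDGES = {
    "depot":   {"stop_a": 4, "stop_b": 2},
    "stop_a":  {"shelter": 3, "depot": 4},
    "stop_b":  {"stop_a": 1, "shelter": 7, "depot": 2},
    "shelter": {},  # terminal state: evacuees delivered
}
GOAL = "shelter"

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning: state = current bus stop, action = next stop.
    Reward penalizes travel time and rewards reaching the shelter."""
    rng = random.Random(seed)
    q = {s: {a: 0.0 for a in nbrs} for s, nbrs in EDGES.items() if nbrs}
    for _ in range(episodes):
        s = "depot"
        while s != GOAL:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(list(EDGES[s]))
            else:
                a = max(q[s], key=q[s].get)
            s2 = a
            r = -EDGES[s][a] + (10 if s2 == GOAL else 0)
            future = max(q[s2].values()) if EDGES[s2] else 0.0
            q[s][a] += alpha * (r + gamma * future - q[s][a])
            s = s2
    return q

def greedy_route(q, start="depot"):
    """Follow the learned policy greedily from `start` to the shelter."""
    route, s = [start], start
    while s != GOAL:
        s = max(q[s], key=q[s].get)
        route.append(s)
    return route

q = q_learning()
print(greedy_route(q))  # learned evacuation route from the depot
```

Because the toy environment is deterministic, Q-learning converges to the time-optimal route (depot → stop_b → stop_a → shelter, 6 minutes total rather than the 7-minute direct path via stop_a). The paper's framework additionally folds equity terms into the reward, which this sketch omits.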

Authors: 
Tang, Fang
Wang, Han
Delle Monache, Maria Laura
Publication date: 
November 1, 2025
Publication type: 
Journal Article
Citation: 
Tang, F., Wang, H., & Delle Monache, M. L. (2025). Strategizing Equitable Transit Evacuations: A Data-driven Reinforcement Learning Approach. Transportation Research Part C: Emerging Technologies, 180. https://doi.org/10.1016/j.trc.2025.105342