DuMES: Deep reinforcement learning based EV charging scheduling with dual-layer safety modules
Zhang, A., Liu, C., Makantasis, K., Chen, X.
DOI: 10.1049/ses2.70017

Abstract
Deep reinforcement learning (DRL) has become a promising approach for electric vehicle (EV) charging scheduling. However, its practical deployment poses potential risks to power infrastructure: DRL relies on trial-and-error interactions during training to approximate optimal policies, which may lead to unsafe decisions. To address this, a novel framework called dual-layer safety modules for EV charging scheduling (DuMES) is proposed. This framework introduces a decision-level safety layer into the conventional DRL architecture that adaptively detects and replaces unsafe actions. Furthermore, by integrating dual safety layers with reward shaping, the framework promotes convergence between raw and safe actions. This enhances training efficiency while ensuring power system stability during both training and deployment phases. The method was evaluated through simulation experiments on a charging station equipped with renewable energy and an energy storage system (ESS). Comparative analyses with baseline methods demonstrate that DuMES effectively satisfies user charging demands, reduces operational costs, and ensures compliance with safety constraints.
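The abstract does not specify the exact safety-layer formulation or the shaping term, so the following is a minimal Python sketch of the general idea under stated assumptions: a decision-level safety layer that projects an unsafe raw action onto a feasible one (here, per-charger ratings and an aggregate station power limit), and a reward-shaping penalty on the distance between raw and safe actions. All names (safe_project, shaped_reward, max_station_power, penalty_coef) and the numeric values are illustrative, not taken from the paper.

```python
import numpy as np

def safe_project(raw_action, max_station_power, charger_limits):
    """Decision-level safety layer (illustrative): replace an unsafe raw
    action with a nearby feasible one.

    raw_action: per-charger power set-points proposed by the DRL policy (kW).
    charger_limits: per-charger maximum power (kW).
    max_station_power: grid-connection limit for the whole station (kW).
    """
    # Clip each charger to its own rating and to non-negative power.
    a = np.clip(raw_action, 0.0, charger_limits)
    # If the aggregate exceeds the station limit, scale all chargers down
    # proportionally so the total just meets the limit.
    total = a.sum()
    if total > max_station_power:
        a *= max_station_power / total
    return a

def shaped_reward(base_reward, raw_action, safe_action, penalty_coef=0.1):
    """Reward shaping (illustrative): penalise the gap between the raw and
    the safe action so the policy learns to propose feasible actions itself."""
    gap = np.linalg.norm(raw_action - safe_action)
    return base_reward - penalty_coef * gap

# Example: 4 chargers rated 22 kW each, 60 kW station limit.
limits = np.full(4, 22.0)
raw = np.array([20.0, 21.0, 18.0, 15.0])   # 74 kW total -> exceeds station limit
safe = safe_project(raw, 60.0, limits)     # scaled down to a 60 kW total
r = shaped_reward(-5.0, raw, safe)         # base reward minus the raw/safe gap penalty
```

Because the filter acts on every action before it reaches the environment, the grid constraint holds during training as well as deployment, while the shaping term drives the raw and safe actions to converge, which is the behaviour the abstract attributes to DuMES.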