Deep reinforcement learning for energy-efficient workflow scheduling in edge computing
Wen, M., Liu, X., Ning, X., Liu, C., Chen, X.
DOI: 10.1016/j.comnet.2025.111790

Abstract
Workflow scheduling in dynamic edge computing environments faces the challenge of minimizing completion time and energy consumption under unpredictable workloads and limited resources. Traditional methods cannot adapt to dynamic environmental changes and often suffer from high computational complexity. We propose DQN-Edge, an efficient scheduling method that uses an attention-based Deep Q-Network (DQN) to learn optimal task prioritization and allocation policies. DQN-Edge's two-phase approach first prioritizes tasks with a modified upward ranking algorithm that accounts for critical-path dependencies, then employs a DQN with a context-aware attention mechanism to adaptively balance time and energy weights. We conducted a comprehensive evaluation of DQN-Edge on real-world scientific workflows under various conditions, such as different workflow arrival intervals and edge node computing capabilities. Compared with existing methods, DQN-Edge shortens makespan and reduces energy consumption across different scenarios while maintaining a high success rate.
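The first phase relies on a modified upward ranking; the modification itself is not detailed in this record, so the following is only a minimal sketch of the classic HEFT-style upward rank over a workflow DAG, assuming per-task average computation costs `w` and per-edge communication costs `c` (hypothetical names). Tasks with higher rank lie on longer downstream paths, so sorting by descending rank yields a critical-path-aware priority order.

```python
def upward_ranks(succ, w, c):
    """HEFT-style upward rank (sketch, not the paper's exact variant):
    rank_u(t) = w[t] + max over successors s of (c[(t, s)] + rank_u(s)),
    with exit tasks getting rank_u(t) = w[t].
    succ: task -> list of successor tasks
    w:    task -> average computation cost (assumed input)
    c:    (task, successor) -> communication cost (assumed input)"""
    memo = {}

    def rank(t):
        if t not in memo:
            memo[t] = w[t] + max(
                (c[(t, s)] + rank(s) for s in succ.get(t, [])),
                default=0.0,  # exit task: no successors
            )
        return memo[t]

    return {t: rank(t) for t in w}

# Toy workflow: t0 -> {t1, t2} -> t3
succ = {"t0": ["t1", "t2"], "t1": ["t3"], "t2": ["t3"]}
w = {"t0": 3.0, "t1": 2.0, "t2": 4.0, "t3": 1.0}
c = {("t0", "t1"): 1.0, ("t0", "t2"): 0.5,
     ("t1", "t3"): 1.0, ("t2", "t3"): 2.0}

ranks = upward_ranks(succ, w, c)
priority = sorted(ranks, key=ranks.get, reverse=True)  # e.g. t0, t2, t1, t3
```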
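For the second phase, the paper's context-aware attention architecture and reward function are not given in this record; the PyTorch sketch below only illustrates the general idea of a DQN head in which a task-context query attends over per-edge-node features to score each candidate node, with an assumed weighted time/energy reward. All dimensions, names, and the `alpha` trade-off are hypothetical.

```python
import torch
import torch.nn as nn

class AttentionQNet(nn.Module):
    """Illustrative attention-based DQN head (not the paper's exact model):
    a task context vector forms the query, per-edge-node features form keys
    and values, and each node's value plus the attention-pooled context is
    mapped to one Q-value per candidate node."""
    def __init__(self, task_dim=8, node_dim=6, hidden=64):
        super().__init__()
        self.q_proj = nn.Linear(task_dim, hidden)  # query from task context
        self.k_proj = nn.Linear(node_dim, hidden)  # keys from node features
        self.v_proj = nn.Linear(node_dim, hidden)  # values from node features
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, task, nodes):
        # task: (B, task_dim); nodes: (B, N, node_dim) -> Q-values: (B, N)
        q = self.q_proj(task).unsqueeze(1)             # (B, 1, H)
        k, v = self.k_proj(nodes), self.v_proj(nodes)  # (B, N, H)
        attn = torch.softmax(
            (q * k).sum(-1, keepdim=True) / k.shape[-1] ** 0.5, dim=1
        )                                              # attention over nodes
        ctx = (attn * v).sum(1, keepdim=True).expand_as(v)  # shared context
        return self.head(torch.cat([v, ctx], dim=-1)).squeeze(-1)

def reward(time_cost, energy_cost, alpha=0.5):
    """Assumed weighted objective: higher reward for lower makespan
    contribution and lower energy; alpha trades the two off."""
    return -(alpha * time_cost + (1.0 - alpha) * energy_cost)

# Greedy allocation for a batch of 2 states with 5 candidate nodes each
net = AttentionQNet()
actions = net(torch.randn(2, 8), torch.randn(2, 5, 6)).argmax(dim=1)
```

In a full agent, this head would sit inside a standard DQN loop with experience replay and a target network; only the Q-network and reward shape are sketched here.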