

Deep reinforcement learning for energy-efficient workflow scheduling in edge computing

Wen, M., Liu, X., Ning, X., Liu, C., Chen, X. ORCID: https://orcid.org/0000-0001-9267-355X, Nian, J. and Cheng, L. (2026) Deep reinforcement learning for energy-efficient workflow scheduling in edge computing. Computer Networks, 274. 111790. ISSN 1389-1286

Text (Accepted Version): 2025_DQN_Edge.pdf, 1MB
· Restricted to Repository staff only
· The copyright of this document has not been checked yet. This may affect its availability.

It is advisable to refer to the publisher's version if you intend to cite from this work. See Guidance on citing.

To link to this item, use the DOI: 10.1016/j.comnet.2025.111790

Abstract/Summary

Workflow scheduling in dynamic edge computing environments faces the challenge of minimizing both completion time and energy consumption under unpredictable workloads and limited resources. Traditional methods cannot adapt to dynamic environmental changes and often suffer from high computational complexity. We propose DQN-Edge, an efficient scheduling method that uses an attention-based Deep Q-Network (DQN) to learn optimal task-prioritization and task-allocation policies. DQN-Edge takes a two-phase approach: it first prioritizes tasks using a modified upward-ranking algorithm that accounts for critical-path dependencies, then employs a DQN with a context-aware attention mechanism to balance time and energy weights adaptively. We conducted a comprehensive evaluation of DQN-Edge using real-world scientific workflows under varied conditions, such as different workflow arrival intervals and edge-node computing capabilities. Compared with existing methods, DQN-Edge shortens makespan and reduces energy consumption across different scenarios while maintaining a high success rate.
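As an illustration of the first phase described in the abstract, a minimal Python sketch follows. The paper's modified upward-ranking algorithm is not reproduced here; this is the standard HEFT-style upward rank as a stand-in, and all names (succ, avg_cost, comm_cost) are hypothetical.

    from functools import lru_cache

    def upward_rank(tasks, succ, avg_cost, comm_cost):
        """Order tasks by descending upward rank.

        succ[t]: successors of task t in the workflow DAG
        avg_cost[t]: mean execution time of t across edge nodes
        comm_cost[(t, s)]: mean data-transfer time from t to s
        """
        @lru_cache(maxsize=None)
        def rank(t):
            children = succ.get(t, ())
            if not children:
                return avg_cost[t]  # exit task: rank is its own cost
            # rank_u(t) = w(t) + max over successors s of (c(t, s) + rank_u(s))
            return avg_cost[t] + max(comm_cost[(t, s)] + rank(s) for s in children)
        # A higher rank means a longer path to the exit task, i.e. closer to
        # the critical path, so that task is scheduled earlier.
        return sorted(tasks, key=rank, reverse=True)

    # Tiny worked example on a diamond-shaped DAG:
    succ = {"A": ("B", "C"), "B": ("D",), "C": ("D",), "D": ()}
    avg_cost = {"A": 3.0, "B": 2.0, "C": 4.0, "D": 1.0}
    comm_cost = {("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "D"): 2.0, ("C", "D"): 0.5}
    print(upward_rank(list(avg_cost), succ, avg_cost, comm_cost))  # ['A', 'C', 'B', 'D']

For the second phase, the abstract describes a DQN with a context-aware attention mechanism that adaptively balances time and energy, but does not give the architecture; the PyTorch sketch below is therefore only one plausible reading, in which attention over edge-node features, conditioned on the current task, yields a Q-value per candidate node. Every class and dimension name here is an assumption.

    import torch
    import torch.nn as nn

    class AttentionDQN(nn.Module):
        # Hypothetical sketch: score each candidate edge node (= action) with
        # a Q-value, attending over node features conditioned on the task.
        def __init__(self, task_dim, node_dim, hidden=64):
            super().__init__()
            self.q_proj = nn.Linear(task_dim, hidden)  # task -> attention query
            self.k_proj = nn.Linear(node_dim, hidden)  # node -> attention key
            self.v_proj = nn.Linear(node_dim, hidden)  # node -> attention value
            self.q_head = nn.Linear(hidden, 1)         # value -> scalar Q

        def forward(self, task, nodes):
            # task: (task_dim,); nodes: (n_nodes, node_dim)
            q = self.q_proj(task)                                    # (hidden,)
            k, v = self.k_proj(nodes), self.v_proj(nodes)
            attn = torch.softmax(k @ q / k.shape[-1] ** 0.5, dim=0)  # (n_nodes,)
            ctx = attn.unsqueeze(-1) * v        # attention-weighted node values
            return self.q_head(ctx).squeeze(-1)  # one Q-value per edge node

    # A greedy policy would then assign the highest-ranked waiting task to
    # torch.argmax(model(task_features, node_features)).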

Item Type: Article
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science
ID Code: 125502
Publisher: Elsevier
