Cost-aware cloud workflow scheduling using DRL and simulated annealing

Gu, Y., Cheng, F., Yang, L., Xu, J. ORCID: https://orcid.org/0009-0009-2964-8971, Chen, X. ORCID: https://orcid.org/0000-0001-9267-355X and Cheng, L. ORCID: https://orcid.org/0000-0003-1638-059X (2024) Cost-aware cloud workflow scheduling using DRL and simulated annealing. Digital Communications and Networks. ISSN 2352-8648
It is advisable to refer to the publisher's version if you intend to cite from this work. DOI: 10.1016/j.dcan.2023.12.009

Abstract/Summary

Cloud workloads are highly dynamic and complex, making task scheduling in cloud computing a challenging problem. Although several scheduling algorithms have been proposed in recent years, they are mainly designed for batch tasks and are not well suited to real-time workloads. To address this issue, researchers have begun exploring the use of Deep Reinforcement Learning (DRL). However, existing DRL models handle only independent tasks and cannot process workflows, which are prevalent in cloud computing and consist of related subtasks. In this paper, we propose SA-DQN, a scheduling approach designed specifically for real-time cloud workflows. Our approach seamlessly integrates the Simulated Annealing (SA) and Deep Q-Network (DQN) algorithms: SA determines an optimal execution order of a workflow's subtasks on a cloud server, and this order serves as a key task feature for the neural network to learn. We present the design of our approach in detail, and experimental results show that SA-DQN outperforms existing algorithms in handling real-time cloud workflows.
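To illustrate the simulated-annealing component described in the abstract, the following is a minimal Python sketch of using SA to search for a low-cost execution order of subtasks on a single server. The cost model, penalty weight, and task data are assumptions made for illustration; this is not the authors' SA-DQN formulation or cost function.

import math
import random

def schedule_cost(order, exec_times, deadlines):
    """Toy cost: accumulated finish times plus a penalty for missed deadlines (assumed model)."""
    t, cost = 0.0, 0.0
    for task in order:
        t += exec_times[task]
        cost += t
        if t > deadlines[task]:
            cost += 10.0  # assumed penalty weight for a deadline miss
    return cost

def anneal_order(exec_times, deadlines, steps=5000, t0=1.0, alpha=0.999):
    """Search for a low-cost subtask order by swapping two positions per step."""
    order = list(range(len(exec_times)))
    random.shuffle(order)
    best = order[:]
    cur_cost = best_cost = schedule_cost(order, exec_times, deadlines)
    temp = t0
    for _ in range(steps):
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new_cost = schedule_cost(order, exec_times, deadlines)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if new_cost < cur_cost or random.random() < math.exp((cur_cost - new_cost) / temp):
            cur_cost = new_cost
            if new_cost < best_cost:
                best, best_cost = order[:], new_cost
        else:
            order[i], order[j] = order[j], order[i]  # undo the rejected swap
        temp *= alpha  # geometric cooling schedule
    return best, best_cost

if __name__ == "__main__":
    exec_times = [3.0, 1.0, 2.0, 4.0]   # assumed per-subtask execution times
    deadlines = [5.0, 2.0, 6.0, 12.0]   # assumed per-subtask deadlines
    order, cost = anneal_order(exec_times, deadlines)
    print("subtask order:", order, "cost:", round(cost, 2))
    # In SA-DQN, such an order would be encoded as part of the task state
    # presented to the DQN; here it is simply printed.

The sketch uses a pairwise-swap neighbourhood and geometric cooling, which are common SA choices; the paper itself should be consulted for the actual state encoding and annealing schedule used in SA-DQN.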