UCL Discovery

RLQ: Workload Allocation With Reinforcement Learning in Distributed Queues

Staffolani, Alessandro; Darvariu, Victor-Alexandru; Bellavista, Paolo; Musolesi, Mirco; (2023) RLQ: Workload Allocation With Reinforcement Learning in Distributed Queues. IEEE Transactions on Parallel and Distributed Systems , 34 (3) pp. 856-868. 10.1109/tpds.2022.3231981. Green open access

Text: tpds23_rlq.pdf - Accepted Version (3MB)

Abstract

Distributed workload queues are nowadays widely used due to their significant advantages in terms of decoupling, resilience, and scaling. Task allocation to worker nodes in distributed queue systems is typically simplistic (e.g., Least Recently Used) or uses hand-crafted heuristics that require task-specific information (e.g., task resource demands or expected time of execution). When such task information is not available and worker node capabilities are not homogeneous, the existing placement strategies may lead to unnecessarily long execution times and high usage costs. In this work, we formulate the task allocation problem in the Markov Decision Process framework, in which an agent assigns tasks to an available resource and receives a numerical reward signal upon task completion. Our adaptive and learning-based task allocation solution, Reinforcement Learning based Queues (RLQ), is implemented and integrated with the popular Celery task queuing system for Python. We compare RLQ against traditional solutions using both synthetic and real workload traces. On average, using synthetic workloads, RLQ reduces the execution cost by approximately 70%, the execution time by a factor of at least 3×, and the waiting time by almost 7×. Using real traces, we observe an improvement of about 20% for execution cost, around 70% improvement for execution time, and a reduction of approximately 20× in waiting time. We also compare RLQ with a strategy inspired by E-PVM, a state-of-the-art solution used in Google's Borg cluster manager, showing that RLQ outperforms it in five out of six scenarios.
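To illustrate the MDP-style formulation described in the abstract, the following is a minimal sketch of a learning-based task allocator: a tabular Q-learning agent that assigns each incoming task to one of several heterogeneous worker queues and receives a reward based on the task's completion time. This is not the authors' RLQ implementation; the state discretisation, reward definition, worker speeds, and hyperparameters (N_WORKERS, WORKER_SPEED, ALPHA, GAMMA, EPSILON) are all simplifying assumptions made for this example.

```python
import random
from collections import defaultdict

# Illustrative sketch only (not the paper's RLQ system): a tabular Q-learning
# agent that allocates tasks to worker queues without knowing task demands in
# advance. All constants below are assumptions for this toy example.

N_WORKERS = 3                          # assumed number of heterogeneous workers
WORKER_SPEED = [1.0, 0.5, 0.25]        # assumed speed factors, unknown to the agent
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration

# State: discretised backlog level of each worker; action: index of the chosen worker.
Q = defaultdict(lambda: [0.0] * N_WORKERS)

def discretise(backlogs, bucket=5.0):
    """Map continuous per-worker backlogs to a small discrete state."""
    return tuple(min(int(b // bucket), 4) for b in backlogs)

def assign(backlogs, worker, task_size):
    """Assign the task to a worker; return (reward, new backlogs)."""
    new = list(backlogs)
    new[worker] += task_size / WORKER_SPEED[worker]  # time this task adds to the queue
    reward = -new[worker]                            # reward = negative completion time
    return reward, new

def choose(state):
    """Epsilon-greedy action selection over the learned Q-values."""
    if random.random() < EPSILON:
        return random.randrange(N_WORKERS)
    values = Q[state]
    return values.index(max(values))

for episode in range(200):
    backlogs = [0.0] * N_WORKERS
    for _ in range(50):                      # 50 tasks arrive per episode
        task_size = random.uniform(0.5, 2.0)
        s = discretise(backlogs)
        a = choose(s)
        r, backlogs = assign(backlogs, a, task_size)
        s_next = discretise(backlogs)
        # Standard Q-learning update on the observed transition.
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        backlogs = [max(0.0, b - 1.0) for b in backlogs]  # workers drain over time

# After training, the agent should prefer faster / less loaded workers
# when all queues are empty.
print({w: round(Q[(0, 0, 0)][w], 2) for w in range(N_WORKERS)})
```

In a real deployment such as the Celery integration described in the paper, the allocation decision would be made at task-routing time and the reward observed only when the worker reports task completion; this sketch collapses both into a single simulated step for clarity.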

Type: Article
Title: RLQ: Workload Allocation With Reinforcement Learning in Distributed Queues
Open access status: An open access version is available from UCL Discovery
DOI: 10.1109/tpds.2022.3231981
Publisher version: https://doi.org/10.1109/TPDS.2022.3231981
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: task allocation, reinforcement learning, distributed task queuing
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery.ucl.ac.uk/id/eprint/10164859
Downloads since deposit: 130
