
RL-QN: A Reinforcement Learning Framework for Optimal Control of Queueing Systems

Abstract

With the rapid advance of information technology, network systems have become increasingly complex and hence the underlying system dynamics are often unknown or difficult to characterize. Finding a good network control policy is of significant importance to achieve desirable network performance (e.g., high throughput or low delay). In this work, we consider using model-based reinforcement learning (RL) to learn the optimal control policy for queueing networks so that the average job delay (or equivalently the average queue backlog) is minimized. Traditional approaches in RL, however, cannot handle the unbounded state spaces of the network control problem. To overcome this difficulty, we propose a new algorithm, called Reinforcement Learning for Queueing Networks (RL-QN), which applies model-based RL methods over a finite subset of the state space, while applying a known stabilizing policy for the rest of the states. We establish that the average queue backlog under RL-QN with an appropriately constructed subset can be arbitrarily close to the optimal result. We evaluate RL-QN in dynamic server allocation, routing and switching problems. Simulation results show that RL-QN minimizes the average queue backlog effectively.
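To make the hybrid-control idea in the abstract concrete, here is a minimal sketch of a policy that learns an empirical model and plans over a finite region of the state space, while falling back to a known stabilizing rule (serve the longest queue) outside that region. The toy two-queue server-allocation system, the arrival and service probabilities, the truncation of out-of-region transitions, and names such as `THRESHOLD` and `stabilizing_policy` are illustrative assumptions for this sketch, not details taken from the paper, which defines its own model-based estimator and analysis.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

ARRIVAL_P = (0.3, 0.3)     # Bernoulli arrival probabilities (toy model, assumed)
SERVICE_P = (0.9, 0.6)     # service success probability when serving queue i
THRESHOLD = 15             # learned region: states with both queue lengths <= THRESHOLD
GAMMA = 0.99

REGION = list(itertools.product(range(THRESHOLD + 1), repeat=2))
IDX = {s: i for i, s in enumerate(REGION)}

def stabilizing_policy(state):
    """Known stabilizing rule used outside the finite region: serve the longest queue."""
    return int(np.argmax(state))

def step(state, action):
    """One slot of a toy two-queue dynamic server-allocation system."""
    q = list(state)
    if q[action] > 0 and rng.random() < SERVICE_P[action]:
        q[action] -= 1
    for i in range(2):
        if rng.random() < ARRIVAL_P[i]:
            q[i] += 1
    return tuple(q)

def truncate(state):
    """Map out-of-region successors back into the region (illustrative simplification)."""
    return tuple(min(x, THRESHOLD) for x in state)

# Empirical model: transition counts for (state, action) pairs inside the region.
counts = np.zeros((len(REGION), 2, len(REGION)))
V = np.zeros(len(REGION))
policy_table = np.zeros(len(REGION), dtype=int)

def plan():
    """Value iteration on the estimated, truncated MDP with queue backlog as cost."""
    global V, policy_table
    cost = np.array([sum(s) for s in REGION], dtype=float)
    for _ in range(200):
        Qsa = np.zeros((len(REGION), 2))
        for a in range(2):
            n = counts[:, a, :]
            totals = n.sum(axis=1, keepdims=True)
            # Unvisited (state, action) pairs: assume the state stays put as a placeholder.
            P = np.where(totals > 0, n / np.maximum(totals, 1), np.eye(len(REGION)))
            Qsa[:, a] = cost + GAMMA * P @ V
        V = Qsa.min(axis=1)
        policy_table = Qsa.argmin(axis=1)

state = (0, 0)
total_backlog = 0.0
T = 100_000
for t in range(1, T + 1):
    if max(state) <= THRESHOLD:
        # Inside the finite subset: explore a little, otherwise follow the planned policy.
        a = rng.integers(2) if rng.random() < 0.1 else int(policy_table[IDX[state]])
    else:
        a = stabilizing_policy(state)          # outside the subset: stabilizing fallback
    nxt = step(state, a)
    if max(state) <= THRESHOLD:
        counts[IDX[state], a, IDX[truncate(nxt)]] += 1
    total_backlog += sum(nxt)
    state = nxt
    if t % 5000 == 0:
        plan()                                 # periodically re-solve the estimated MDP

print(f"average backlog over {T} slots: {total_backlog / T:.2f}")
```

The key design point mirrored here is the split: learning and planning happen only on the bounded region, so tabular methods remain applicable, while the stabilizing fallback keeps the queues from growing without bound whenever the state leaves that region.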


Publication:
arXiv e-prints
Pub Date:
November 2020
DOI:
10.48550/arXiv.2011.07401
arXiv:
arXiv:2011.07401
Bibcode:
2020arXiv201107401L
Keywords:
  • Computer Science - Performance;
  • Computer Science - Machine Learning
Full Text Sources:
Preprint