Computer Science > Information Theory
arXiv:2303.00958 (cs)
[Submitted on 2 Mar 2023 (v1), last revised 14 Sep 2023 (this version, v2)]
Title: A Deep Reinforcement Learning-Based Resource Scheduler for Massive MIMO Networks
Abstract: The large number of antennas in massive MIMO systems allows the base station to communicate with multiple users on the same time-frequency resource through multi-user beamforming. However, highly correlated user channels can drastically impede the spectral efficiency that multi-user beamforming achieves. It is therefore critical for the base station to schedule a suitable group of users in each time-frequency resource block so as to maximize spectral efficiency while adhering to fairness constraints among the users. In this paper, we consider the resource scheduling problem for massive MIMO systems, whose optimal solution is known to be NP-hard. Inspired by recent achievements in deep reinforcement learning (DRL) for problems with large action sets, we propose \name{}, a dynamic scheduler for massive MIMO based on the state-of-the-art Soft Actor-Critic (SAC) DRL model and the K-Nearest Neighbors (KNN) algorithm. Through comprehensive simulations using realistic massive MIMO channel models as well as real-world datasets from channel measurement experiments, we demonstrate the effectiveness of the proposed model under various channel conditions. Our results show that the proposed model performs very close to the optimal proportionally fair (Opt-PF) scheduler in terms of spectral efficiency and fairness, with more than one order of magnitude lower computational complexity at medium network sizes where Opt-PF is computationally feasible. Our results also show the feasibility and high performance of the proposed scheduler in networks with a large number of users and resource blocks.
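The abstract's pairing of SAC, a continuous-action DRL method, with KNN suggests mapping a continuous proto-action produced by the actor onto the large discrete set of candidate user groups: KNN retrieves the nearest discrete groups, and the best-scoring candidate is scheduled. The sketch below illustrates only that mapping step under assumptions not stated in the abstract; the binary indicator embedding of user groups, the helper select_group, and the placeholder candidate score are illustrative, not the authors' implementation.

    # Minimal, illustrative sketch (not the authors' code) of KNN-based action
    # selection over a large discrete set of user groups, as used alongside a
    # continuous-action DRL policy such as SAC. Group embeddings as binary
    # indicator vectors and the random stand-in score are assumptions.
    from itertools import combinations
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)

    num_users = 16   # users in the cell (assumed)
    group_size = 4   # users scheduled per resource block (assumed)

    # Enumerate candidate user groups and embed each as a binary indicator vector.
    groups = list(combinations(range(num_users), group_size))
    embeddings = np.zeros((len(groups), num_users))
    for i, g in enumerate(groups):
        embeddings[i, list(g)] = 1.0

    knn = NearestNeighbors(n_neighbors=8).fit(embeddings)

    def select_group(proto_action):
        """Map a continuous proto-action (actor output) to its K nearest discrete
        user groups, then return the best-scoring candidate. A random score
        stands in for the critic's Q-value in this sketch."""
        _, idx = knn.kneighbors(proto_action.reshape(1, -1))
        candidates = idx[0]
        scores = rng.random(len(candidates))
        return groups[candidates[np.argmax(scores)]]

    # Example: proto_action would come from the SAC actor; random here.
    print(select_group(rng.random(num_users)))

In this style of scheduler, restricting the critic's evaluation to the K retrieved candidates is what keeps per-decision complexity low even when the number of possible user groups grows combinatorially.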
Comments: IEEE Transactions on Machine Learning in Communications and Networking (TMLCN) 2023
Subjects: Information Theory (cs.IT)
Cite as: arXiv:2303.00958 [cs.IT]
(or arXiv:2303.00958v2 [cs.IT] for this version)
DOI: https://doi.org/10.48550/arXiv.2303.00958 (arXiv-issued DOI via DataCite)
Related DOI: https://doi.org/10.1109/TMLCN.2023.3313988
Submission history
From: Qing An
[v1] Thu, 2 Mar 2023 04:28:27 UTC (6,604 KB)
[v2] Thu, 14 Sep 2023 03:34:00 UTC (6,632 KB)