Computer Networks: The International Journal of Computer and Telecommunications Networking
The limited availability of frequency bands and their capacity limitations, together with the constantly increasing demand for high-bit-rate services in wireless communication systems, require the use of smart radio resource management strategies to ensure that different services are provided with the required quality of service (QoS) and that the available radio resources are used efficiently. In addition, the evolution of technology toward higher spectral efficiency has led to the introduction of Orthogonal Frequency-Division Multiple Access (OFDMA) by 3GPP for use in future long-term evolution (LTE) systems. However, given the current penetration of legacy technologies such as Universal Mobile Telecommunications System (UMTS), operators will face some periods in which both Radio Access Technologies (RATs) coexist. In this context, Joint Radio Resource Management (JRRM) mechanisms are helpful because they enable complementarities between different RATs to be exploited and thus facilitate more efficient use of available radio resources. This paper proposes a novel dynamic JRRM algorithm for LTE-UMTS coexistence scenarios based on Reinforcement Learning (RL), which is considered to be a good candidate for achieving the desired degree of flexibility and adaptability in future reconfigurable networks. The proposed algorithm is evaluated in dynamic environments under different load conditions and is compared with various baseline solutions.
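To make the RL-based JRRM idea concrete, the sketch below shows a minimal tabular Q-learning agent that assigns each incoming session to one of two coexisting RATs (UMTS or LTE). This is an illustrative toy, not the algorithm proposed in the paper: the state encoding, reward model, and parameter values (`alpha`, `gamma`, `epsilon`) are assumptions chosen for demonstration only.

```python
import random

# Hypothetical action set: the two coexisting RATs in the scenario.
RATS = ["UMTS", "LTE"]


class JRRMAgent:
    """Toy Q-learning agent for RAT selection (illustrative only)."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha = alpha      # learning rate
        self.gamma = gamma      # discount factor
        self.epsilon = epsilon  # exploration probability
        self.q = {}             # Q-table: (state, action) -> value

    def select_rat(self, state):
        """Epsilon-greedy RAT selection for a new session request."""
        if random.random() < self.epsilon:
            return random.choice(RATS)  # explore
        # Exploit: pick the RAT with the highest learned value.
        return max(RATS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        """Standard Q-learning update after observing the outcome
        (e.g. a positive reward if QoS was met, negative if the
        session was blocked or degraded -- the reward model here
        is an assumption)."""
        best_next = max(self.q.get((next_state, a), 0.0) for a in RATS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)
```

In a simulation loop, the network state could be a coarse load indicator per RAT, with rewards derived from admission and QoS outcomes; over repeated episodes the agent steers new sessions away from congested RATs, which is the kind of adaptive behavior the abstract attributes to the RL-based JRRM approach.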