Enhancing network performance in Distributed Cognitive Radio Networks using single-agent and multi-agent Reinforcement Learning

  • Authors:
  • Kok-Lim Alvin Yau; Peter Komisarczuk; Paul D. Teal

  • Affiliations:
  • School of Engineering and Computer Science, Victoria University of Wellington, New Zealand;School of Computing and Technology, Thames Valley University, UK;School of Engineering and Computer Science, Victoria University of Wellington, New Zealand

  • Venue:
  • LCN '10 Proceedings of the 2010 IEEE 35th Conference on Local Computer Networks
  • Year:
  • 2010

Abstract

Cognitive Radio (CR) is a next-generation wireless communication system that enables unlicensed users to exploit underutilized licensed spectrum in order to optimize the utilization of the overall radio spectrum. A Distributed Cognitive Radio Network (DCRN) is a distributed wireless network established by a number of unlicensed users in the absence of fixed network infrastructure such as a base station. Context awareness and intelligence are the capabilities that enable each unlicensed user to observe its operating environment and carry out its own action, as part of the joint action, to enhance network-wide performance. These capabilities can be applied in various application schemes in CR networks such as Dynamic Channel Selection (DCS), congestion control, and scheduling. In this paper, we apply Reinforcement Learning (RL), including single-agent and multi-agent approaches, to achieve context awareness and intelligence. Firstly, we show that the RL approach achieves a joint action that provides better network-wide performance with respect to DCS in DCRNs. The multi-agent approach is shown to provide a higher level of stability than the single-agent approach. Secondly, we show that RL achieves a high level of fairness. Thirdly, we show the effects of network density and of various essential RL parameters on the network-wide performance.
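
For readers unfamiliar with how RL maps onto Dynamic Channel Selection, the sketch below is a minimal, illustrative example and not the authors' exact formulation: it assumes a simple Q-learning agent with an epsilon-greedy policy, one Q-value per licensed channel, and a reward of +1 for a successful transmission and -1 when the channel is found busy. The class name, parameters, and the simulated reward signal are all hypothetical.

```python
import random

class ChannelSelectionAgent:
    """Minimal Q-learning agent: an unlicensed user learns which
    licensed channel tends to be idle, from transmission feedback."""

    def __init__(self, num_channels, alpha=0.1, gamma=0.0, epsilon=0.1):
        self.q = [0.0] * num_channels   # one Q-value per channel
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor (0 => myopic reward)
        self.epsilon = epsilon          # exploration probability

    def select_channel(self):
        # Epsilon-greedy: mostly exploit the best-known channel,
        # occasionally explore another one.
        if random.random() < self.epsilon:
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=lambda c: self.q[c])

    def update(self, channel, reward):
        # Assumed reward: +1 for a collision-free transmission,
        # -1 when licensed-user activity is detected on the channel.
        best_next = max(self.q)
        self.q[channel] += self.alpha * (reward + self.gamma * best_next - self.q[channel])


# Toy usage: one agent learning over a few hundred time slots.
agent = ChannelSelectionAgent(num_channels=5)
for t in range(300):
    ch = agent.select_channel()
    # In a real DCRN the reward would come from sensing/ACK feedback;
    # here channel 2 is simulated as the most frequently idle one.
    reward = 1.0 if (ch == 2 and random.random() < 0.9) or random.random() < 0.3 else -1.0
    agent.update(ch, reward)
print("Learned Q-values:", [round(v, 2) for v in agent.q])
```

In a multi-agent setting along the lines discussed in the paper, each unlicensed user would run such an agent and the joint action would emerge from their individual channel choices; the paper's specific state, action, and reward definitions are given in the full text.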