A reinforcement learning based solution for cognitive network cooperation between co-located, heterogeneous wireless sensor networks

  • Authors:
  • Milos Rovcanin, Eli De Poorter, Ingrid Moerman, Piet Demeester

  • Venue:
  • Ad Hoc Networks
  • Year:
  • 2014

Abstract

Due to the drastic increase in the number of wireless communication devices, co-located devices increasingly interfere and interact with one another, raising the question of how this coexistence affects network performance. Negative effects stem from contention between devices for shared network resources (such as free wireless communication frequencies) and can be avoided if co-located networks cooperate with each other and share the available resources. This paper presents a self-learning, cognitive cooperation approach for heterogeneous co-located networks. Cooperation is performed by activating or deactivating services such as interference avoidance, packet sharing, and various MAC protocols. Activating a cooperative service may affect a network's performance both positively and negatively with respect to its high-level goals. Such a cooperation approach therefore has to incorporate a reasoning mechanism, centralized or distributed, capable of determining the influence of each symbiotic service on the performance of all participating sub-networks, taking their requirements into consideration. In this paper, a cooperation method incorporating a machine learning technique known as Least Squares Policy Iteration (LSPI) is proposed and discussed as a novel network cooperation paradigm.
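To illustrate the LSPI technique named in the abstract, the sketch below shows the two steps LSPI alternates: an LSTD-Q least-squares fit of the Q-function from a batch of samples, followed by greedy policy improvement. The two-state, two-action MDP is hypothetical (standing in for "activate / deactivate a cooperative service") and is not taken from the paper itself.

```python
import numpy as np

GAMMA = 0.9
N_STATES, N_ACTIONS = 2, 2  # hypothetical toy MDP, not from the paper

def phi(s, a):
    """One-hot basis function over (state, action) pairs."""
    f = np.zeros(N_STATES * N_ACTIONS)
    f[s * N_ACTIONS + a] = 1.0
    return f

def lstdq(samples, policy, gamma=GAMMA):
    """LSTD-Q: fit weights w so that phi(s, a) . w approximates Q^policy."""
    k = N_STATES * N_ACTIONS
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy[s_next])
        A += np.outer(f, f - gamma * f_next)
        b += f * r
    # Small ridge term keeps A invertible for sparse sample sets.
    return np.linalg.solve(A + 1e-6 * np.eye(k), b)

def lspi(samples, n_iter=20):
    policy = np.zeros(N_STATES, dtype=int)  # start: service deactivated
    for _ in range(n_iter):
        w = lstdq(samples, policy)
        # Greedy policy improvement w.r.t. the fitted Q-function.
        policy = np.array([
            int(np.argmax([phi(s, a) @ w for a in range(N_ACTIONS)]))
            for s in range(N_STATES)
        ])
    return policy, w

# Hypothetical experience tuples (s, a, reward, s'): activating the
# cooperative service (a = 1) yields reward 1 in either state.
samples = [
    (0, 0, 0.0, 0), (0, 1, 1.0, 1),
    (1, 0, 0.0, 0), (1, 1, 1.0, 1),
]
policy, w = lspi(samples)
print(policy)  # -> [1 1]: activate the service in both states
```

On this toy batch the loop converges after two iterations to the policy that activates the service in both states, mirroring how the paper's reasoning mechanism would decide per-service activation from observed network performance.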