Overhead-Controlled routing in WSNs with reinforcement learning

  • Authors:
  • Leonardo R. S. Campos; Rodrigo D. Oliveira; Jorge D. Melo; Adrião D. Dória Neto

  • Affiliations:
  • Departamento de Engenharia de Computação e Automação, Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN, Brazil (all authors)

  • Venue:
  • IDEAL'12 Proceedings of the 13th international conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2012


Abstract

The use of wireless sensor networks in industry has increased in the past few years, bringing multiple benefits over wired systems, such as network flexibility and manageability. Such networks consist of a possibly large number of small, autonomous sensor and actuator devices with wireless communication capabilities. The data collected by the sensors are sent -- directly or through intermediary nodes along the network -- to a base station called the sink node. Data routing in this environment is an essential matter, since it is tightly bound to energy efficiency and thus to the network lifetime. This work investigates the application of a routing technique based on reinforcement learning's Q-learning algorithm to a wireless sensor network, using an NS-2 simulated environment. Several metrics, such as routing overhead, data packet delivery rate, and delay, are used to validate the proposal by comparing it with other solutions in the literature.
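The abstract does not detail the learning setup, but the general idea of Q-learning-based routing can be sketched as follows: each node maintains Q-values for its neighbors as candidate next hops, and updates them with the standard Q-learning rule as packets travel toward the sink. The topology, reward of -1 per hop, and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Illustrative line topology: node i can forward to its neighbors; node 3 is the sink.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3]}
SINK = 3
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # assumed learning rate, discount, exploration

# Q[node][next_hop]: estimated value (negative cost-to-sink) of forwarding
# a packet via that neighbor.
Q = {n: {nh: 0.0 for nh in nhs} for n, nhs in neighbors.items()}

for _ in range(500):  # simulated packet deliveries from node 0
    node = 0
    while node != SINK:
        nhs = neighbors[node]
        # Epsilon-greedy next-hop selection.
        if random.random() < EPS:
            nxt = random.choice(nhs)
        else:
            nxt = max(nhs, key=lambda h: Q[node][h])
        # Reward of -1 per hop penalizes long (energy-expensive) routes.
        reward = -1.0
        future = 0.0 if nxt == SINK else max(Q[nxt].values())
        Q[node][nxt] += ALPHA * (reward + GAMMA * future - Q[node][nxt])
        node = nxt

# After training, node 1 should prefer forwarding toward the sink (via node 2).
best = max(neighbors[1], key=lambda h: Q[1][h])
print(best)  # -> 2
```

In this toy run the Q-values converge so that forwarding toward the sink (Q[1][2] near -1.9) dominates backtracking (Q[1][0] near -3.4), so node 1 learns the shorter route. The paper's actual scheme additionally controls routing overhead, which this sketch does not model.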