Reinforcing probabilistic selective Quality of Service routes in dynamic irregular networks

  • Authors:
  • Abdelhamid Mellouk, Saïd Hoceïni, Mustapha Cheurfa

  • Affiliations:
  • LISSI/SCTIC Laboratory, IUT Creteil-Vitry, University Paris XII, 122, rue Paul Armangot, Vitry sur Seine 94400, France (all authors)

  • Venue:
  • Computer Communications
  • Year:
  • 2008

Abstract

In the context of modern high-speed Internet networks, routing is often complicated by the need to guarantee Quality of Service (QoS), which may involve delay, packet loss, or bandwidth requirements: constraints on these QoS parameters make some routes unacceptable. With the emergence of real-time and multimedia applications, efficient routing of packets in a dynamically changing communication network requires the routing policy to adapt as load levels, traffic patterns, and network topology change. In this paper we focus on QoS-based routing and develop a neuro-dynamic programming approach to construct dynamic, state-dependent routing policies. We propose an adaptive packet-routing algorithm based on reinforcement learning, called the N best optimal path Q-routing algorithm (NOQRA), which optimizes two criteria: cumulative path cost (or hop count if each link cost equals 1) and end-to-end delay. A load-balancing policy based on a dynamic traffic path probability distribution function is also defined and embodied in NOQRA to characterize the distribution of traffic over the N best paths. Numerical results obtained with the OPNET simulator, for different statistical distributions of packet inter-arrival times and different traffic load levels, show that NOQRA outperforms standard optimal-path routing algorithms.
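
The abstract describes a Q-routing-style reinforcement learning scheme with probabilistic load balancing over the N best paths. The Python sketch below is a minimal, hypothetical illustration of that general idea, assuming a Boyan-Littman-style Q-routing delay update and a simple inverse-delay weighting over the N best next hops; the names (QRoutingNode, select_next_hop, LEARNING_RATE, N_BEST) and the exact update rule are illustrative assumptions, not the paper's NOQRA specification.

    import random
    from collections import defaultdict

    LEARNING_RATE = 0.5   # assumed step size (not specified in the abstract)
    N_BEST = 3            # number of candidate next hops kept per destination

    class QRoutingNode:
        def __init__(self, node_id, neighbors):
            self.node_id = node_id
            self.neighbors = list(neighbors)
            # q[(dest, neighbor)] = estimated delivery delay for packets to dest
            # when forwarded through that neighbor (optimistic initial value).
            self.q = defaultdict(lambda: 1.0)

        def select_next_hop(self, dest):
            # Rank neighbors by estimated delay, keep the N best, then pick one
            # with probability inversely proportional to its estimate, so that
            # traffic is spread over several good paths instead of a single one.
            ranked = sorted(self.neighbors, key=lambda n: self.q[(dest, n)])
            candidates = ranked[:N_BEST]
            weights = [1.0 / (self.q[(dest, n)] + 1e-9) for n in candidates]
            r = random.uniform(0.0, sum(weights))
            acc = 0.0
            for n, w in zip(candidates, weights):
                acc += w
                if r <= acc:
                    return n
            return candidates[-1]

        def update(self, dest, neighbor, queue_delay, link_delay, neighbor_estimate):
            # Temporal-difference update in the spirit of Q-routing: move the
            # local estimate toward the observed local delay plus the chosen
            # neighbor's own best estimate of the remaining delay to dest.
            target = queue_delay + link_delay + neighbor_estimate
            key = (dest, neighbor)
            self.q[key] += LEARNING_RATE * (target - self.q[key])

In this sketch each node keeps only local delay estimates; in NOQRA the dynamic traffic path probability distribution over the N best paths would take the place of the simple inverse-delay weighting used here.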