Apply Markov decision process to class-based packet buffer management

  • Authors:
  • Ching-Lung Chang; Jia-Kai Lin

  • Affiliations:
  • Department of Computer Science and Information Engineering, National Yunlin University of Science & Technology, Yunlin, Taiwan, R.O.C. (both authors)

  • Venue:
  • ICCOM'06 Proceedings of the 10th WSEAS international conference on Communications
  • Year:
  • 2006

Abstract

With the development of the Internet and the rapid growth of network bandwidth, the quality of network transmission has improved significantly. However, among today's diverse network applications, a large share of bandwidth may be occupied by a few computers or users, such as ill-behaved computers or improper peer-to-peer users, which degrades the communication quality of service (QoS) of ordinary network users and applications. Dealing with such unfair bandwidth usage has therefore become a critical issue in network management. In general, network bandwidth management can be realized through packet buffer management or output packet scheduling. In this paper, we formulate the packet admission control of buffer management as a Markov Decision Process (MDP) optimization problem. All received packets are classified into three types: real-time class, good-user class, and bad-user class. According to the QoS requirement of each packet class and the traffic condition, the admission policy derived from the MDP decides whether to accept or discard each received packet, yielding an optimal packet buffer management. Through computer simulation, we demonstrate that the MDP-based buffer management can confine the buffer occupancy of bad-user class packets and significantly decrease the discarding rate of real-time class and good-user class packets.
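
To make the kind of formulation described in the abstract concrete, the following is a minimal Python sketch of value iteration for a class-based admission MDP. It is an illustrative assumption rather than the authors' exact model: the buffer size, arrival probabilities, departure model (one probabilistic departure per slot, drawn in proportion to class occupancy), and the drop/holding costs are all made up for the example, and only the overall structure (per-class buffer state, accept/drop actions, class-dependent penalties) mirrors the approach described.

```python
# Hypothetical sketch: MDP-based packet admission control with three classes
# (0 = real-time, 1 = good-user, 2 = bad-user).  All numbers are assumptions.
from itertools import product

B = 8                          # total buffer capacity (assumed)
GAMMA = 0.95                   # discount factor (assumed)
P_ARRIVAL = [0.3, 0.3, 0.3]    # per-slot arrival probability of each class
MU = 0.7                       # per-slot probability that one packet departs
DROP_COST = [10.0, 5.0, 1.0]   # penalty for discarding a packet of each class
HOLD_COST = [0.0, 0.0, 0.5]    # per-slot cost for buffered bad-user packets

# States are per-class occupancy vectors (n_rt, n_gd, n_bd) with sum <= B.
states = [s for s in product(range(B + 1), repeat=3) if sum(s) <= B]
V = {s: 0.0 for s in states}

def after_departure(s):
    """Expected value after the (simplified) service step: with probability MU
    one packet leaves, drawn from a class in proportion to its occupancy."""
    total = sum(s)
    if total == 0:
        return V[s]
    value = (1 - MU) * V[s]
    for c in range(3):
        if s[c] > 0:
            nxt = list(s); nxt[c] -= 1
            value += MU * (s[c] / total) * V[tuple(nxt)]
    return value

def bellman_backup(s):
    """One synchronous Bellman backup: holding cost plus the expected cost of
    the best accept/drop decision for each possible arriving class."""
    hold = sum(HOLD_COST[c] * s[c] for c in range(3))
    expected = (1 - sum(P_ARRIVAL)) * GAMMA * after_departure(s)  # no arrival
    for c in range(3):
        q_drop = DROP_COST[c] + GAMMA * after_departure(s)        # discard it
        if sum(s) < B:
            nxt = list(s); nxt[c] += 1
            q_accept = GAMMA * after_departure(tuple(nxt))        # admit it
            best = min(q_accept, q_drop)
        else:
            best = q_drop                                         # full buffer
        expected += P_ARRIVAL[c] * best
    return hold + expected

# Plain value iteration until the value function is effectively stable.
for _ in range(300):
    V = {s: bellman_backup(s) for s in states}

def admit(s, c):
    """Greedy policy from the converged values: accept a class-c arrival only
    if admitting it does not increase the expected discounted cost."""
    if sum(s) >= B:
        return False
    nxt = list(s); nxt[c] += 1
    return GAMMA * after_departure(tuple(nxt)) <= DROP_COST[c] + GAMMA * after_departure(s)

# Example query: should a bad-user packet be admitted at occupancy (2, 2, 3)?
print(admit((2, 2, 3), 2))
```

With the class-dependent drop and holding costs above, the resulting policy tends to reserve remaining buffer space for real-time and good-user packets and to reject bad-user packets as occupancy grows, which is the qualitative behavior the abstract reports for the MDP-derived admission policy.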