Nash Q-learning multi-agent flow control for high-speed networks

  • Authors:
  • Yuanwei Jing; Xin Li; Georgi M. Dimirovski; Yan Zheng; Siying Zhang

  • Affiliations:
  • Faculty of Information Science and Engineering, Northeastern University, Shenyang, Liaoning, P.R. China (Jing, Li, Zheng, Zhang); Faculty of Engineering, Computer Engineering Dept., Dogus University of Istanbul, Istanbul, Turkey (Dimirovski)

  • Venue:
  • ACC'09: Proceedings of the 2009 American Control Conference
  • Year:
  • 2009

Abstract

For the congestion control problem in high-speed networks, a multi-agent flow controller (MFC) based on the Q-learning algorithm in conjunction with the theory of Nash equilibrium is proposed. Because of the uncertainties and the highly time-varying nature of high-speed networks, it is difficult to obtain accurate and complete state information, especially in the multi-bottleneck case. The Nash Q-learning algorithm, which does not require a mathematical model of the network, is therefore particularly well suited to this setting. It obtains the Nash Q-values through trial and error and interaction with the network environment, and uses them to improve its behavior policy. By means of this learning procedure, MFCs learn to take the best actions to regulate source flow, achieving high throughput and a low packet-loss ratio. Simulation results show that the proposed method improves network performance and effectively avoids congestion.
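
As an illustration only (not taken from the paper): in Nash Q-learning, the single-agent max backup is replaced by the value each agent receives at a Nash equilibrium of the stage game defined by the agents' current Q-tables at the next state, i.e. Q_i(s, a1, a2) <- (1 - alpha) * Q_i(s, a1, a2) + alpha * (r_i + gamma * NashQ_i(s')). The Python sketch below shows this update for two source agents sharing one bottleneck queue. The state/action discretization, the toy queue dynamics, the reward shape, and the pure-strategy equilibrium search are all assumptions made for readability; the full algorithm generally requires mixed-strategy equilibria (e.g. via Lemke-Howson), and the paper's actual network model is not reproduced here.

```python
import numpy as np

# Minimal two-agent Nash Q-learning sketch (hypothetical setup; the paper's
# exact network model, state/action sets, and rewards are not given here).

N_STATES = 5    # assumed: discretized bottleneck-queue occupancy levels
N_ACTIONS = 3   # assumed: source rate choices, e.g. {decrease, hold, increase}
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
# One Q-table per agent, indexed by (state, action_of_agent_0, action_of_agent_1).
Q = [np.zeros((N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(2)]

def pure_nash(q0, q1):
    """Return a pure-strategy Nash equilibrium of the stage game (q0, q1)
    found by enumeration; fall back to the jointly greedy cell if none
    exists. (A simplification: Nash Q-learning proper uses mixed equilibria.)"""
    for a0 in range(N_ACTIONS):
        for a1 in range(N_ACTIONS):
            if q0[a0, a1] >= q0[:, a1].max() and q1[a0, a1] >= q1[a0, :].max():
                return a0, a1
    return np.unravel_index(np.argmax(q0 + q1), q0.shape)

def step(state, a0, a1):
    """Toy single-bottleneck dynamics: a higher joint rate fills the queue.
    The reward trades throughput against loss near overflow (assumed form)."""
    load = (a0 + a1) - 2                      # net pressure on the queue
    next_state = min(max(state + load, 0), N_STATES - 1)
    loss = 1.0 if next_state == N_STATES - 1 else 0.0
    r0 = a0 - 2.0 * loss                      # throughput minus loss penalty
    r1 = a1 - 2.0 * loss
    return next_state, r0, r1

state = 0
for episode in range(5000):
    # Epsilon-greedy exploration around the current stage game's Nash actions.
    if rng.random() < EPS:
        a0, a1 = rng.integers(N_ACTIONS), rng.integers(N_ACTIONS)
    else:
        a0, a1 = pure_nash(Q[0][state], Q[1][state])
    next_state, r0, r1 = step(state, a0, a1)
    # Nash Q-values of the successor state's stage game replace the max backup.
    n0, n1 = pure_nash(Q[0][next_state], Q[1][next_state])
    nash_q = (Q[0][next_state][n0, n1], Q[1][next_state][n0, n1])
    for i, r in enumerate((r0, r1)):
        Q[i][state, a0, a1] += ALPHA * (r + GAMMA * nash_q[i] - Q[i][state, a0, a1])
    state = next_state

print("Learned equilibrium actions per queue state:")
for s in range(N_STATES):
    print(s, pure_nash(Q[0][s], Q[1][s]))
```

The pure-strategy enumeration keeps the sketch short and self-contained; in a faithful implementation, each MFC would solve the bimatrix stage game for a (possibly mixed) Nash equilibrium at every backup, which is the model-free mechanism the abstract refers to as learning Nash Q-values through interaction with the network.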