This paper proposes an adaptive reinforcement co-learning method for solving congestion control problems in high-speed networks. Conventional congestion control schemes regulate the source rate by monitoring whether the queue length exceeds a predefined threshold. However, complete statistics on a network's input traffic are difficult to obtain, so effective thresholds for high-speed networks cannot be determined accurately. To solve this problem, we propose a simple and robust Co-learning Multi-agent Congestion Controller (CMCC), which consists of two subsystems: a long-term policy evaluator and a short-term rate selector, coupled through a co-learning reinforcement signal. The well-trained controllers can adaptively take correct actions to regulate the source flow in time-varying environments. Simulation results show that the proposed approach improves system utilization while simultaneously decreasing packet losses.
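The controller structure described above (a long-term policy evaluator paired with a short-term rate selector sharing one reinforcement signal) can be sketched as a minimal actor-critic-style learner. This is an illustrative sketch only: the class name, state discretization, rate multipliers, and learning parameters below are assumptions for demonstration, not the paper's actual design.

```python
import random

class CoLearningRateController:
    """Hypothetical sketch of a co-learning congestion controller:
    a long-term policy evaluator (critic) and a short-term rate
    selector (actor) trained with a shared TD-style reinforcement
    signal. All names and parameters are illustrative."""

    def __init__(self, n_states=10, rate_actions=(0.5, 1.0, 1.5),
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.rate_actions = rate_actions          # source-rate multipliers (assumed)
        self.value = [0.0] * n_states             # long-term policy evaluator
        self.q = [[0.0] * len(rate_actions)
                  for _ in range(n_states)]       # short-term rate selector
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def discretize(self, queue_frac):
        # Map normalized queue occupancy in [0, 1] to a state index,
        # instead of comparing against a single fixed threshold.
        return min(int(queue_frac * len(self.value)), len(self.value) - 1)

    def select_action(self, state):
        # Epsilon-greedy short-term rate selection.
        if random.random() < self.eps:
            return random.randrange(len(self.rate_actions))
        row = self.q[state]
        return row.index(max(row))

    def learn(self, s, a, reward, s_next):
        # Co-learning: the critic's TD error is the shared
        # reinforcement signal that also updates the rate selector.
        td = reward + self.gamma * self.value[s_next] - self.value[s]
        self.value[s] += self.alpha * td
        self.q[s][a] += self.alpha * td
```

In use, each control interval the source would discretize its queue occupancy, pick a rate multiplier, observe a reward (e.g. rewarding high utilization and penalizing packet loss), and call `learn` so that both subsystems adapt without a hand-tuned queue threshold.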