Introduction to Reinforcement Learning
An Efficient Partitioning Algorithm for Distributed Virtual Environment Systems
IEEE Transactions on Parallel and Distributed Systems
Reinforcement learning with selective perception and hidden state
Proceedings of the ACM symposium on Virtual reality software and technology
Locality aware dynamic load management for massively multiplayer games
Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming
Improving the Performance of Distributed Virtual Environment Systems
IEEE Transactions on Parallel and Distributed Systems
An architecture to support scalable distributed virtual environment systems on grid
The Journal of Supercomputing
A reinforcement learning approach to dynamic resource allocation
Engineering Applications of Artificial Intelligence
A Latency-Aware Partitioning Method for Distributed Virtual Environment Systems
IEEE Transactions on Parallel and Distributed Systems
A reinforcement learning framework for online data migration in hierarchical storage systems
The Journal of Supercomputing
Dynamic Programming and Optimal Control, Vol. II
A dynamical adjustment partitioning algorithm for distributed virtual environment systems
VRCAI '08 Proceedings of The 7th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and Its Applications in Industry
A reinforcement learning framework for utility-based scheduling in resource-constrained systems
Future Generation Computer Systems
A distributed virtual environment (DVE) consists of multiple network nodes (servers), each of which can host many users that consume CPU resources on that node and communicate with users on other nodes. Users can be dynamically migrated between the nodes, and the ultimate goal of the migration policy is to minimize the average system response time perceived by the users. To achieve this, the user migration policy should minimize network communication while balancing the load among the nodes so that the CPU resources of individual nodes are not overwhelmed. This paper considers a multiplayer online game as an example of a DVE and presents an adaptive distributed user migration policy that uses Reinforcement Learning to tune itself and thus minimize the average system response time perceived by the users. The performance of the self-tuning policy was compared in a simulator against a standard non-adaptive benchmark migration policy and against the optimal static user allocation policy in a variety of scenarios. The self-tuning policy was shown to greatly outperform both benchmark policies, with the performance difference increasing as the network became more overloaded. These results provide yet another demonstration of the power and generality of this methodology for designing adaptive, distributed, and scalable migration policies, which has already been applied successfully to several other domains [17, 18].
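The core idea of the abstract, an agent that learns from experience when migrating a user pays off, can be illustrated with a minimal tabular Q-learning sketch. Everything below (the two-node setting, the state encoding as a load-imbalance bucket plus a remote-partner flag, the reward that penalizes both imbalance and cross-node communication as a stand-in for response time, and all parameter values) is an illustrative assumption for exposition, not the paper's actual algorithm.

```python
import random

# Learning parameters (illustrative assumptions, not from the paper).
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1


def bucket(imbalance):
    """Discretize a load-imbalance measure in [0, 1] into 3 buckets."""
    return min(2, int(imbalance * 3))


class MigrationAgent:
    """Toy Q-learning agent deciding per user: 0 = stay, 1 = migrate.

    State: (load-imbalance bucket, 1 if the user's main communication
    partner is on a remote node else 0). The Q-table maps
    (state, action) pairs to learned values.
    """

    def __init__(self):
        self.q = {}

    def act(self, state):
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            return random.randint(0, 1)
        return max((0, 1), key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning backup.
        best_next = max(self.q.get((next_state, a), 0.0) for a in (0, 1))
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

In a full system, the reward would be derived from measured response times, and each node would run such an agent over its local users, which is what makes the policy distributed and adaptive in the sense described above.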