Dynamic adaptation of user migration policies in distributed virtual environments

  • Authors:
  • David Vengerov

  • Affiliations:
  • Sun Microsystems Laboratories, Menlo Park, CA

  • Year:
  • 2009


Abstract

A distributed virtual environment (DVE) consists of multiple network nodes (servers), each of which can host many users that consume CPU resources on that node and communicate with users on other nodes. Users can be dynamically migrated between the nodes, and the ultimate goal for the migration policy is to minimize the average system response time perceived by the users. To achieve this, the user migration policy should minimize network communication while balancing the load among the nodes so that the CPU resources of individual nodes are not overwhelmed. This paper considers a multiplayer online game as an example of a DVE and presents an adaptive distributed user migration policy, which uses Reinforcement Learning to tune itself and thus minimize the average system response time perceived by the users. The performance of the self-tuning policy was compared in a simulator against a standard non-adaptive benchmark migration policy and against the optimal static user allocation policy in a variety of scenarios, and the self-tuning policy was shown to greatly outperform both benchmark policies, with the performance difference increasing as the network became more overloaded. These results provide yet another demonstration of the power and generality of this methodology for designing adaptive, distributed, and scalable migration policies, which has already been applied successfully to several other domains [17, 18].
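The abstract frames user migration as a tradeoff between two costs: CPU load imbalance across nodes and inter-node communication. A minimal sketch of that tradeoff is below. This is illustrative only and not the paper's method: the paper's policy is adaptive and tuned by Reinforcement Learning, whereas this sketch uses a fixed greedy heuristic with an assumed quadratic load penalty; all function names and cost terms here are hypothetical.

```python
# Illustrative sketch (NOT the paper's algorithm): a greedy user-migration
# heuristic trading off CPU load balancing against inter-node communication,
# the two factors the abstract says a migration policy must weigh.

def migration_gain(user_cpu, user_comm, src, dst, cpu_load):
    """Estimated change in cost from migrating one user from src to dst.

    user_cpu  -- CPU demand of the user
    user_comm -- dict: peer node -> message rate for this user
    cpu_load  -- dict: node -> current CPU load
    A negative return value means the migration is expected to reduce cost.
    """
    # Load-balancing term: an assumed quadratic load penalty, so moving a
    # user off a hot node onto a cool one lowers this term.
    before = cpu_load[src] ** 2 + cpu_load[dst] ** 2
    after = (cpu_load[src] - user_cpu) ** 2 + (cpu_load[dst] + user_cpu) ** 2
    load_delta = after - before

    # Communication term: after the move, traffic to peers on dst becomes
    # local (cheap) while traffic to peers left on src becomes remote.
    comm_delta = user_comm.get(src, 0.0) - user_comm.get(dst, 0.0)

    return load_delta + comm_delta


def pick_migration(users, cpu_load):
    """Greedily pick the single most beneficial (user, destination) move.

    users -- dict: user id -> (home node, cpu demand, comm dict)
    Returns (gain, user_id, dst) or None if no move reduces estimated cost.
    """
    best = None
    for uid, (src, ucpu, ucomm) in users.items():
        for dst in cpu_load:
            if dst == src:
                continue
            gain = migration_gain(ucpu, ucomm, src, dst, cpu_load)
            if gain < 0 and (best is None or gain < best[0]):
                best = (gain, uid, dst)
    return best
```

In the paper's setting, the relative weight of the two terms would not be hand-fixed as it is here; the Reinforcement Learning component tunes the policy online so that the tradeoff tracks the observed average response time.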