Queuing theoretic approach to server allocation problem in time-delay cloud computing systems

  • Authors:
  • Taichi Kusaka; Takashi Okuda; Tetsuo Ideguchi; Xuejun Tian

  • Affiliations:
  • Aichi Prefectural University, Nagakute-cho, Aichi, Japan (all authors)

  • Venue:
  • Proceedings of the 23rd International Teletraffic Congress
  • Year:
  • 2011

Abstract

Cloud computing is a popular computing model for processing large volumes of data on clusters of commodity computers. It aims to power next-generation data centers and enables application service providers to lease data center capabilities for deploying applications according to user QoS (Quality of Service) requirements. Because cloud applications differ in composition, configuration, and deployment requirements, it is important to quantify the performance of resource allocation policies and application scheduling algorithms in cloud computing environments for different application and service models under varying load, network time delay, and system size. To perform this quantification, the authors apply VCHS (Various Customers, Heterogeneous Servers) queuing systems.
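
The paper's VCHS queuing model is not reproduced in this abstract. As a rough illustrative sketch only, the Python discrete-event simulation below estimates the mean response time of a small pool of heterogeneous servers with a fixed network delay added to each job; the earliest-available-server allocation policy and all parameter names here are assumptions for illustration, not the authors' formulation.

import random

def simulate(arrival_rate, service_rates, network_delay, num_jobs, seed=0):
    """Estimate mean response time for a pool of heterogeneous servers.

    Jobs arrive as a Poisson process (rate arrival_rate) and are assigned
    to the server that becomes free earliest; server i serves at an
    exponential rate service_rates[i]. A fixed network_delay is added to
    every job's response time.
    """
    rng = random.Random(seed)
    free_at = [0.0] * len(service_rates)   # time each server next becomes free
    t = 0.0
    total_response = 0.0
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)                      # next arrival time
        i = min(range(len(free_at)), key=lambda k: free_at[k])  # earliest-free server
        start = max(t, free_at[i])                              # queue if it is busy
        free_at[i] = start + rng.expovariate(service_rates[i])  # service completion
        total_response += (free_at[i] - t) + network_delay
    return total_response / num_jobs

# Example: four servers of unequal speed, moderate load, 10 ms network delay
print(simulate(arrival_rate=3.0,
               service_rates=[1.0, 1.0, 2.0, 0.5],
               network_delay=0.01,
               num_jobs=100_000))

Sweeping arrival_rate, service_rates, and network_delay in such a simulation mirrors, in a very simplified way, the kind of sensitivity study over load, time delay, and system size that the abstract describes.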