Glaucus: predicting computing-intensive program's performance for cloud customers

  • Authors:
  • Xia Liu; Zhigang Zhou; Xiaojiang Du; Hongli Zhang; Junchao Wu

  • Affiliations:
  • School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China (Xia Liu, Zhigang Zhou, Hongli Zhang, Junchao Wu); Department of Computer and Information Sciences, Temple University, Philadelphia, PA (Xiaojiang Du)

  • Venue:
  • ICIC'13 Proceedings of the 9th international conference on Intelligent Computing Theories
  • Year:
  • 2013

Abstract

As cloud computing has gained popularity, many organizations are considering migrating their large-scale, computing-intensive programs to the cloud. However, the cloud service market is still in its infancy. Many companies offer a variety of cloud computing services with different pricing schemes, while customers want to spend the least and gain the most. This makes it challenging for them to decide which cloud service provider is most suitable for their programs and how much computing resource should be purchased. To address this issue, we present a performance prediction scheme for computing-intensive programs on the cloud. The basic idea is to map the program into an abstract tree, create a miniature version of the program, and insert checkpoints at the head and tail of each computable independent unit to record its beginning and end timestamps. We then apply dynamic analysis: the miniature version is run locally on small data, and the whole program's cost on the cloud is predicted from the measurements. We identify several features that are closely related to a program's performance, and by analyzing these features we can predict the program's cost on the cloud. Our real-network experiments show that the scheme achieves high prediction accuracy with low overhead.
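To make the checkpointing idea concrete, the sketch below (in Python, not necessarily the paper's implementation language) wraps a hypothetical independent unit with head/tail timestamps, runs it on small data locally, and extrapolates a cloud cost from the measured times. The function names, the price constant, and the simple linear scaling are illustrative assumptions only, not the paper's actual instrumentation or prediction model.

```python
# A minimal sketch of the head/tail checkpoint idea from the abstract.
# All names (timed_unit, process_chunk, predict_cost, PRICE_PER_CPU_SECOND)
# are hypothetical; the paper's feature-based prediction model may differ.
import time

PRICE_PER_CPU_SECOND = 0.0001  # assumed example price for a cloud offer


def timed_unit(unit_fn):
    """Wrap a computable independent unit with head/tail checkpoints
    that record its beginning and end timestamps."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()          # head checkpoint
        result = unit_fn(*args, **kwargs)
        end = time.perf_counter()            # tail checkpoint
        wrapper.samples.append(end - start)  # elapsed time of this unit
        return result
    wrapper.samples = []
    return wrapper


@timed_unit
def process_chunk(chunk):
    """Stand-in for one computable independent unit of the program."""
    return sum(x * x for x in chunk)


def predict_cost(small_input_size, full_input_size, samples):
    """Naively extrapolate the full run's CPU time and price from the
    miniature run, assuming cost scales linearly with input size."""
    local_time = sum(samples)
    scale = full_input_size / small_input_size
    predicted_time = local_time * scale
    return predicted_time, predicted_time * PRICE_PER_CPU_SECOND


# Miniature run on small data locally.
small_data = [list(range(1000)) for _ in range(10)]
for chunk in small_data:
    process_chunk(chunk)

t, price = predict_cost(small_input_size=10, full_input_size=100000,
                        samples=process_chunk.samples)
print(f"predicted cloud CPU time: {t:.1f}s, estimated price: ${price:.2f}")
```

In practice the extrapolation would use the performance-related features the paper analyzes rather than a single linear factor, but the checkpoint bookkeeping per independent unit follows the same pattern.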