Opportunities and challenges to unify workload, power, and cooling management in data centers
Proceedings of the Fifth International Workshop on Feedback Control Implementation and Design in Computing Systems and Networks
High-density blade servers are a popular technology for data centers; however, they cause the heat dissipation density of data centers to rise rapidly. There is strong evidence that high temperatures in such data centers lead to higher hardware failure rates and thus increased maintenance costs. An improperly designed or operated data center may either suffer from overheated servers and potential system failures, or from overcooled systems that incur unnecessary utility costs. Minimizing the cost of operating a data center (utilities, maintenance, device upgrade and replacement) is a key issue in both optimizing computing resources and maximizing business outcomes. This paper proposes an analytical model that describes data center resources with heat transfer properties and workloads with thermal features. A thermal-aware task scheduling algorithm is then presented, which aims to reduce power consumption and temperatures in a data center. A simulation study is carried out to evaluate the performance of the algorithm. Simulation results show that the algorithm can significantly reduce data center temperatures at the cost of a tolerable decline in performance.
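The abstract does not give the scheduling algorithm itself, but the general idea of thermal-aware task placement can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: the `Server` class, the linear heat model (inlet temperature plus a per-watt thermal resistance), the greedy placement rule, and the `temp_limit` threshold are all simplifying assumptions introduced here for illustration.

```python
class Server:
    """A server described by a simplified linear heat-transfer model (assumption)."""

    def __init__(self, name, inlet_temp, thermal_resistance):
        self.name = name
        self.inlet_temp = inlet_temp                  # inlet air temperature, deg C
        self.thermal_resistance = thermal_resistance  # deg C per watt of load (simplified)
        self.load_watts = 0.0                         # power drawn by tasks assigned so far

    def predicted_temp(self, extra_watts=0.0):
        """Steady-state temperature if an extra load were added to this server."""
        return self.inlet_temp + self.thermal_resistance * (self.load_watts + extra_watts)


def schedule(tasks, servers, temp_limit):
    """Greedy thermal-aware placement: each task goes to the server whose
    predicted temperature after assignment is lowest; tasks that would push
    every server past temp_limit are deferred (mapped to None)."""
    placement = {}
    for task_name, task_watts in tasks:
        best = min(servers, key=lambda s: s.predicted_temp(task_watts))
        if best.predicted_temp(task_watts) > temp_limit:
            placement[task_name] = None  # deferring trades performance for lower temperature
        else:
            best.load_watts += task_watts
            placement[task_name] = best.name
    return placement
```

Deferring tasks when no server stays under the temperature limit is one way such a scheduler trades a tolerable performance decline for lower temperatures, matching the trade-off the abstract reports.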