The number of servers worldwide is constantly increasing, estimated in 2010 at "50 million servers in the world today" (Napier [1]), and the power needed to run server farms amounts to "over 1% of the world-wide electricity consumption" (Fettweis and Zimmermann [2]). This power draw is inevitably coupled with more heat dissipation, leading to a cooling problem that constitutes 200% of the direct power consumption in server farms (Schott [3]). Although "most servers are running at 5-15% of their capacity" (Siebert [4]), most worldwide technological and methodological developments have been directed toward reducing power consumption in server farms rather than tackling the more imperative problem of underutilization. The mathematical model presented in this research aims to reduce power consumption by minimizing the number of servers (and ancillary equipment) that need to be on while still meeting the required demand. The model guarantees arriving at the minimal operating power. Applying the proposed approach to three formulated examples reduced the percentage of idle servers from 7.3% to 2.1%, and finally to 0%, respectively.
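The paper's full mathematical model is not reproduced in this abstract, but its core idea — selecting the cheapest set of servers to power on so that their combined capacity meets demand — can be illustrated with a minimal sketch. The function name, the per-server `capacities`/`powers` inputs, and the brute-force subset search below are all illustrative assumptions, not the authors' formulation; they merely show the optimization the model solves (the actual model would use an efficient mathematical-programming method rather than enumeration).

```python
from itertools import combinations

def min_power_servers(capacities, powers, demand):
    """Illustrative brute force: find the subset of servers whose combined
    capacity meets `demand` at minimum total power. Feasible only for
    small n; the paper's model solves this exactly without enumeration."""
    n = len(capacities)
    best_power, best_subset = None, None
    for k in range(n + 1):  # try switching on 0, 1, ..., n servers
        for subset in combinations(range(n), k):
            if sum(capacities[i] for i in subset) >= demand:
                p = sum(powers[i] for i in subset)
                if best_power is None or p < best_power:
                    best_power, best_subset = p, subset
    return best_power, best_subset

# Hypothetical farm: five identical servers, each 10 units of capacity
# at 100 W; a demand of 25 units needs exactly three servers on.
power, on_servers = min_power_servers([10] * 5, [100] * 5, demand=25)
```

With heterogeneous servers the same search would naturally favor the most power-efficient machines, which is how turning off redundant hardware eliminates the idle-server percentage reported in the examples.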