Every joule is precious: the case for revisiting operating system design for energy efficiency

  • Authors:
  • Amin Vahdat; Alvin Lebeck; Carla Schlatter Ellis

  • Affiliations:
  • Duke University, Durham, NC

  • Venue:
  • EW 9 Proceedings of the 9th workshop on ACM SIGOPS European workshop: beyond the PC: new challenges for the operating system
  • Year:
  • 2000

Abstract

By some estimates, there will be close to one billion wireless devices capable of Internet connectivity within five years, surpassing the installed base of traditional wired compute devices. These devices will take the form of cellular phones, personal digital assistants (PDAs), embedded processors, and "Internet appliances". This proliferation of networked computing devices will enable a number of compelling applications, centering around ubiquitous access to global information services, just-in-time delivery of personalized content, and tight synchronization among compute devices/appliances in our everyday environment. However, one of the principal challenges of realizing this vision in the post-PC environment is the need to reduce the energy consumed in using these next-generation mobile and wireless devices, thereby extending the lifetime of the batteries that power them. While the processing power, memory, and network bandwidth of post-PC devices are increasing exponentially, their battery capacity is improving at a more modest pace.

Thus, to ensure the utility of post-PC applications, it is important to develop low-level mechanisms and higher-level policies to maximize energy efficiency. In this paper, we propose the systematic re-examination of all aspects of operating system design and implementation from the point of view of energy efficiency rather than the more traditional OS metric of maximizing performance. In [7], we made the case for energy as a first-class OS-managed resource. We emphasized the benefits of higher-level control over energy usage policy and the application/OS interactions required to achieve them. This paper explores the implications that this major shift in focus can have upon the services, policies, mechanisms, and internal structure of the OS itself, based on our initial experiences with rethinking system design for energy efficiency.

Our ultimate goal is to design an operating system where major components cooperate to explicitly optimize for energy efficiency. A number of research efforts have recently investigated aspects of energy-efficient operating systems (a good overview is available in [16, 20]), and we intend to leverage existing "best practice" in our own work where such results exist. However, we are not aware of any systems that systematically revisit system structure with energy in mind. Further, our examination of operating system functionality reveals a number of opportunities that have received little attention in the literature. To illustrate this point, Table 1 presents major operating system functionality, along with possible techniques for improving power consumption characteristics. Several of the techniques are well studied, such as disk spindown policies or adaptively trading content fidelity for power [8]. For example, to reduce power consumption for MPEG playback, the system could adapt to a smaller frame rate and window size, consuming less bandwidth and computation.

One of the primary objectives of operating systems is allocating resources among competing tasks, typically for fairness and performance. Adding energy efficiency to the equation raises a number of interesting issues. For example, competing processes/users may be scheduled to receive a fair share of battery resources rather than CPU resources (e.g., an application that makes heavy use of disk I/O may be given lower priority relative to a compute-bound application when energy resources are low), as sketched below.
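To make the fair battery-share idea concrete, here is a minimal sketch of an energy-charging scheduler. It is an illustration under our own assumptions (the Task structure, the share values, and the charge/pick_next interface are hypothetical, not part of the paper): each task is billed for the energy its CPU, disk, and network activity consumes, and the scheduler runs the task furthest below its fair share of the battery.

    # Energy-share scheduling sketch (illustrative; names and interfaces assumed).
    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        share: float              # fraction of the battery this task is entitled to
        joules_used: float = 0.0  # energy charged to the task so far

    def charge(task: Task, joules: float) -> None:
        # Account energy (CPU, disk, network) against the task that caused it.
        task.joules_used += joules

    def pick_next(runnable: list) -> Task:
        # Run the task furthest below its fair share of consumed energy.
        return min(runnable, key=lambda t: t.joules_used / t.share)

    tasks = [Task("mpeg_player", share=0.5), Task("disk_indexer", share=0.5)]
    charge(tasks[1], 2.5)         # heavy disk I/O is billed as energy, not CPU time
    print(pick_next(tasks).name)  # -> mpeg_player, which has consumed less of its share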
Similarly, for tasks such as ad hoc routing, local battery resources are often consumed on behalf of remote processes; fair allocation dictates that one battery is not drained in preference to others. Finally, for the communication subsystem, a number of efforts already investigate adaptively setting the polling rate for wireless networks (trading latency for energy).

Our efforts to date have focused on the last four areas highlighted in Table 1. For memory allocation, our work explores how to exploit the ability of memory chips to transition among multiple power states (a simple threshold policy is sketched below). We also investigate metrics for picking energy-efficient routes in ad hoc networks, energy-efficient placement of distributed computation, and flexible RPC/name binding that accounts for power consumption.

These last two points of resource allocation and remote communication highlight an interesting property of energy-aware OS design in the post-PC environment. Many tasks are distributed across multiple machines, potentially running on machines with widely varying CPU, memory, and power source characteristics. Thus, an energy-aware OS must closely cooperate with and track the characteristics of remote computers to balance the often conflicting goals of optimizing for energy and speed.

The rest of this paper illustrates our approach with selected examples extracted from our recent efforts toward building an integrated hardware/software infrastructure that incorporates cooperative power management to support mobile and wireless applications. The instances we present in subsequent sections cover the resource management policies and mechanisms necessary to exploit the low-power modes of various (existing or proposed) hardware components, as well as power-aware communications and the essential role of the wide-area environment. We begin our discussion with the resources of a single machine and then extend it to the distributed context.
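As a closing illustration of the memory power-state idea above, the following minimal sketch shows a threshold policy that moves an idle memory chip into progressively deeper low-power states the longer it goes unreferenced. The state names and thresholds are assumptions for the example, not values from the paper.

    # Threshold policy sketch for memory power-state transitions (illustrative values).
    POWER_STATES = [          # (state, idle time in ms before it may be entered)
        ("active",      0.0),
        ("standby",     1.0),
        ("nap",        10.0),
        ("powerdown", 100.0),
    ]

    def target_state(idle_ms: float) -> str:
        # Return the deepest power state whose idle threshold has been reached.
        state = "active"
        for name, threshold in POWER_STATES:
            if idle_ms >= threshold:
                state = name
        return state

    for idle in (0.5, 5.0, 50.0, 500.0):
        print(idle, "ms idle ->", target_state(idle))

Deeper states save more power while idle but take longer to return to service, so the thresholds trade idle savings against resynchronization delay on the next access.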