Models for energy-efficient approximate computing

  • Authors:
  • Ravi Nair

  • Affiliations:
  • IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA

  • Venue:
  • Proceedings of the 16th ACM/IEEE International Symposium on Low Power Electronics and Design
  • Year:
  • 2010

Abstract

We are at the threshold of an explosion in new data, produced not only by large, powerful scientific and commercial computers, but also by billions of low-power devices. The traditional technique of processing such information by first storing it in databases and then manipulating and serving it through large computers is becoming too expensive. These large, complex systems not only have a high acquisition cost but also suffer from high running costs, especially in power consumption. Both of these costs can be contained by recognizing that traditional computing implies a precision that is not needed in the processing of most new types of data. Relaxing precision can help in the wider exploitation of known energy-efficient modes of computing, such as throughput computing. More importantly, such relaxation gives us an opportunity to deploy, in the processing of this vast new data, the same low-power, low-cost technology that was used to generate the data in the first place. Such energy-efficient circuits suffer from greater unreliability and variability in performance when used in the high-throughput mode, but these problems can be addressed by changing the way we design such systems, by changing the nature of the algorithms they run, and by modifying our expectations of the quality of the results they produce. We have called this the approximate computing paradigm. There are two sources of imperfection in approximate computing. The first arises from imperfect execution of an algorithm. The second arises from imperfection in the data stream itself. All of these imperfections could potentially be rectified through the use of expensive techniques such as redundancy, conservative design, or conservative device operating ranges. The goal of approximate computing, however, is to combat these sources of imperfection inexpensively and in an energy-efficient manner while producing results that may be different, yet acceptable. Computing models that achieve this goal have to address both the detection and the correction of such imperfections. Detection can be done by the user observing and reacting to a wrong result, by the algorithm expecting a range of correct results, or by run-time monitoring of the system's execution. Correction of system behavior can be done by attempting a different algorithm, by patching the code, or by repeating the execution. We will argue in this talk that future systems will need to combine all of these techniques and integrate new ones into a single dynamically optimized system that employs feedback from the user to guide the high-level choice of energy-efficient algorithms, and that employs prediction based on past experience to guide the low-level energy-efficient execution of the system. This has a tantalizing similarity to some models of the functioning of a remarkably efficient approximate computing appliance we all know -- the human brain.
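
The abstract does not give code, but the detect-and-correct loop it describes can be illustrated with a minimal sketch. In the hypothetical Python below (all function names and parameters are illustrative, not from the paper), detection is done by the algorithm expecting a range of correct results, and correction is done first by repeating the execution and then by attempting a different, more conservative algorithm.

```python
# Hypothetical illustration of the detect-and-correct loop described in the
# abstract; the kernels and thresholds here are invented for the example.
import random


def approximate_sum(values):
    """Stand-in for an energy-efficient kernel whose execution may be imperfect."""
    total = sum(values)
    if random.random() < 0.1:   # model an occasional execution imperfection
        total *= 1.5
    return total


def precise_sum(values):
    """Stand-in for a costly but fully reliable fallback algorithm."""
    return sum(values)


def run_with_acceptance_check(values, lo, hi, max_retries=2):
    """Detect: result outside [lo, hi]. Correct: retry, then fall back."""
    for _ in range(max_retries):
        result = approximate_sum(values)
        if lo <= result <= hi:   # the algorithm expects a range of correct results
            return result
    return precise_sum(values)   # correction by attempting a different algorithm


if __name__ == "__main__":
    data = [1.0] * 100
    # Accept any answer within 5% of the nominal value of 100.
    print(run_with_acceptance_check(data, lo=95.0, hi=105.0))
```

In a full system of the kind the talk envisions, the acceptance range and the retry policy would themselves be tuned dynamically, using feedback from the user and prediction based on past executions.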