Ten Years of Building Broken Chips: The Physics and Engineering of Inexact Computing

  • Authors: Krishna Palem; Avinash Lingamneni
  • Affiliations: Rice University and Nanyang Technological University; Rice University and CSEM SA
  • Venue: ACM Transactions on Embedded Computing Systems (TECS), Special Section on Probabilistic Embedded Computing
  • Year: 2013

Abstract

Well over a decade ago, many believed that an engine of growth driving the semiconductor and computing industries, captured nicely by Gordon Moore’s remarkable prophecy (Moore’s law), was speeding towards a dangerous cliff-edge. Ranging from expressions of concern to doomsday scenarios, predictions of exactly when serious hurdles would beset us varied quite a bit, with even the more optimistic warnings setting a definite expiration date for Moore’s law. Needless to say, a great deal of time and effort has since been spent, with considerable success, on finding ways to substantially postpone the dreaded cliff-edge, if not avoid it altogether. Faced with this issue, we approached it in a decidedly different manner: one that treats falling off the metaphorical cliff as a design choice, taken in a controlled way. This resulted in devices that switch and produce bits that are correct, that is, that have the intended value, only with a probabilistic guarantee; consequently, the computed results can in fact be incorrect. Such devices and the associated circuits and computing structures are now broadly referred to as inexact designs, circuits, and architectures. In this article, we crystallize the essence of inexactness, dating back to 2002, through two key principles that we developed: (i) admitting error in a design in return for resource savings, and subsequently (ii) making resource investments in the elements of a hardware platform proportional to the value of the information they compute. We also give a broad overview of the range of inexact designs and hardware concepts that our group and other groups around the world have developed since, based on these two principles. Despite not being deterministically precise, inexact designs can be significantly more efficient in the energy they consume, their speed of execution, and their area needs, which makes them attractive in application contexts that are resilient to error. Significantly, our development of inexactness is contrasted against the rich backdrop of traditional approaches aimed at realizing reliable computing from unreliable elements, starting with von Neumann’s influential lectures and further developed by Shannon-Weaver and others.
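To make the two principles concrete, the sketch below models in software one well-known style of inexact adder from the approximate-computing literature, the lower-part-OR adder: the low-order bits, which carry the least numerical value, are summed with a cheap carry-free approximation (a bitwise OR), while only the high-order bits receive an exact adder. This is a minimal illustrative sketch, not the authors' hardware; the 16-bit operand width and the 8-bit approximate segment are assumptions chosen for the example.

```python
import random

def inexact_add(a, b, approx_bits=8):
    """Lower-part-OR style inexact adder (software model).

    Principle (i): error is admitted in return for savings; the
    low-order carry chain, a major cost in hardware adders, is dropped.
    Principle (ii): exact resources are invested only in the high-order
    bits, in proportion to the value of the information they carry.
    """
    lo_mask = (1 << approx_bits) - 1
    # Exact addition on the high-value bits; no carry is accepted from below.
    hi_sum = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    # Carry-free OR approximation on the low-value bits.
    lo_sum = (a | b) & lo_mask
    return hi_sum | lo_sum

if __name__ == "__main__":
    random.seed(1)
    trials, worst, total = 100_000, 0, 0.0
    for _ in range(trials):
        a, b = random.getrandbits(16), random.getrandbits(16)
        err = abs(inexact_add(a, b) - (a + b))
        worst = max(worst, err)
        total += err
    # Since a + b == (a | b) + (a & b), the error equals the low-order
    # (a & b), so it is always below 2**approx_bits.
    print(f"worst error: {worst}, mean error: {total / trials:.1f}")
```

In hardware, eliminating the low-order carry chain shortens the critical path and saves energy and area, which is where the efficiency gains of such inexact adders arise; the software model above exposes only the accuracy side of that trade.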