Error-rate testing to improve yield for error-tolerant applications

  • Authors:
  • Sandeep K. Gupta; Shideh M. Shahidi

  • Affiliations:
  • University of Southern California; University of Southern California

  • Year:
  • 2008

Abstract

VLSI scaling has entered an era where achieving desired yields is becoming increasingly challenging. The concept of error tolerance has been previously proposed with the goal of reversing this trend for classes of systems that do not require completely error-free operation. Such systems include audio, speech, video, graphics, and digital communications. Analysis of such applications has identified error rate as one of the key metrics of error severity. Error rate is defined as the percentage of clock cycles for which the value at the outputs deviates from the corresponding error-free value. An error-tolerant application provides a threshold error rate: chips with an error rate below the threshold are considered acceptable and can be used; all other chips must be discarded. To maximize the yield gain from error tolerance, a test must discard all unacceptable chips while discarding no acceptable chips. Our main objective in error-rate testing is therefore to detect all unacceptable faults while not detecting any acceptable faults; our second objective is to keep test generation and application times comparable to those of classical testing. We prove that for arbitrary circuits the main objective is not always achievable. However, we develop a test generator that minimizes the number of acceptable faults that are detected. Our results show that it is possible to discard all chips with an unacceptable fault while discarding only a small percentage of chips with an acceptable fault. We introduce the new notion of multi-vector testing, where testing is performed using a set of test sessions, each comprising multiple vectors. We redefine the conditions under which a chip is accepted or discarded and prove that, using this new notion, our main objective of error-rate testing is achievable for all fault models. We theoretically derive a universal upper bound on the number of required vectors. This large upper bound conflicts with our second objective, so we employ modeled faults and a structural approach to achieve the main objective with fewer vectors. Our results confirm the promise of error tolerance: higher yields can be achieved at little to no compromise in cost.
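
The sketch below is a minimal illustration, not the paper's implementation, of the error-rate metric and the threshold-based accept/discard decision described in the abstract: a faulty chip is simulated against the error-free circuit over a set of vectors, and it is accepted only if the fraction of erroneous clock cycles stays below the application's threshold. All names (good_circuit, faulty_circuit, threshold, the toy AND-gate fault) are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of error-rate-based acceptance (assumed names, toy circuits).
import random
from typing import Callable, List, Sequence, Tuple

Vector = Tuple[int, ...]              # one input vector, applied in one clock cycle
Circuit = Callable[[Vector], Tuple]   # maps an input vector to its output values


def error_rate(good: Circuit, faulty: Circuit, vectors: List[Vector]) -> float:
    """Fraction of applied vectors (clock cycles) whose outputs deviate
    from the error-free outputs."""
    erroneous = sum(1 for v in vectors if good(v) != faulty(v))
    return erroneous / len(vectors)


def accept_chip(good: Circuit, faulty: Circuit,
                vectors: List[Vector], threshold: float) -> bool:
    """Accept the chip only if its estimated error rate is below the
    application-provided threshold; otherwise discard it."""
    return error_rate(good, faulty, vectors) < threshold


# Toy example: a 2-input AND gate with a fault that flips the output
# whenever both inputs are 1 (it errs on 1 of the 4 input combinations).
good_and = lambda v: (v[0] & v[1],)
faulty_and = lambda v: ((v[0] & v[1]) ^ (1 if v == (1, 1) else 0),)

random.seed(0)
test_vectors = [tuple(random.randint(0, 1) for _ in range(2)) for _ in range(1000)]
rate = error_rate(good_and, faulty_and, test_vectors)
print(f"estimated error rate: {rate:.2%}")
print("accept" if accept_chip(good_and, faulty_and, test_vectors, threshold=0.30)
      else "discard")
```

Note that this sketch only reflects the basic single-vector view of the metric; the multi-vector testing introduced in the paper instead bases the accept/discard decision on test sessions, each containing multiple vectors, which is what makes the main objective achievable for all fault models.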