A Balanced Approach to High-Level Verification: Performance Trade-Offs in Verifying Large-Scale Multiprocessors

  • Authors:
  • Dennis Abts; Mike Roberts; David J. Lilja

  • Affiliations:
  • -;-;-

  • Venue:
  • ICPP '00 Proceedings of the 2000 International Conference on Parallel Processing
  • Year:
  • 2000

Abstract

A single node of a modern scalable multiprocessor consists of several ASICs comprising tens of millions of gates. This level of integration and complexity imposes an enormous onus on the verification process. A variety of tools, ranging from discrete-event logic simulation to formal model checking, can be used to attack this problem. Unfortunately, conventional simulation techniques, with their primitive interface to the hardware (i.e., test vectors), are inadequate tools for reasoning about the correctness of complex architectural features, such as cache coherence protocols and memory consistency models. Similarly, model checkers offer very limited utility on such large designs. We have previously proposed [1] a novel verification framework, called Raven, that addresses many of these challenges. In this paper, we examine the performance implications of verifying systems at higher levels of abstraction. A detailed performance analysis is conducted to compare this higher-level approach against an equivalent Verilog test bench. We establish lower and upper bounds on the performance of the Raven environment executing on a single processor, on a set of distributed processors, and on a shared-memory multiprocessor.
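
To make the contrast concrete, the sketch below (in C++) shows the kind of transaction-level property a higher-level verification environment can state directly, while a vector-level Verilog test bench would have to infer the same property from pin activity. It checks a simple single-writer/multiple-reader coherence invariant over a trace of granted requests. This is an illustrative assumption only: the types and names (Transaction, CoherenceChecker, Op) are hypothetical and do not reflect the Raven framework's actual interface, which is not described in this abstract.

    // Hypothetical sketch: checking a single-writer/multiple-reader coherence
    // invariant over abstract transactions instead of bit-level test vectors.
    // All names are illustrative; they are not part of the Raven framework.
    #include <cstdint>
    #include <iostream>
    #include <unordered_map>
    #include <unordered_set>
    #include <vector>

    enum class Op { ReadShared, ReadExclusive, Invalidate };

    struct Transaction {
        uint64_t addr;   // cache-line address
        int      node;   // requesting node id
        Op       op;     // access that the design under test granted
    };

    class CoherenceChecker {
    public:
        // Returns false if granting this transaction would violate the invariant
        // that a line has either one exclusive owner or any number of sharers.
        bool apply(const Transaction& t) {
            auto& line = lines_[t.addr];
            switch (t.op) {
                case Op::ReadShared:
                    if (line.owner != kNone && line.owner != t.node) return false;
                    line.sharers.insert(t.node);
                    return true;
                case Op::ReadExclusive:
                    if (line.owner != kNone && line.owner != t.node) return false;
                    for (int s : line.sharers)
                        if (s != t.node) return false;  // another node still holds the line
                    line.sharers.clear();
                    line.owner = t.node;
                    return true;
                case Op::Invalidate:
                    line.sharers.erase(t.node);
                    if (line.owner == t.node) line.owner = kNone;
                    return true;
            }
            return true;
        }

    private:
        static constexpr int kNone = -1;
        struct LineState {
            int owner = kNone;
            std::unordered_set<int> sharers;
        };
        std::unordered_map<uint64_t, LineState> lines_;
    };

    int main() {
        CoherenceChecker checker;
        std::vector<Transaction> trace = {
            {0x1000, 0, Op::ReadShared},
            {0x1000, 1, Op::ReadExclusive},  // violation: node 0 still shares the line
        };
        for (const auto& t : trace) {
            if (!checker.apply(t))
                std::cout << "coherence violation at addr 0x" << std::hex << t.addr << "\n";
        }
        return 0;
    }

Because the check is phrased over granted transactions rather than signal-level activity, it is independent of cycle timing and pin encodings, which is the kind of abstraction gain the paper attributes to verifying at a higher level.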