Accelerating RTL simulation with GPUs

  • Authors:
  • Hao Qian; Yangdong Deng

  • Affiliations:
  • Tsinghua University, Beijing, China

  • Venue:
  • Proceedings of the International Conference on Computer-Aided Design
  • Year:
  • 2011


Abstract

With the rapidly increasing complexity of integrated circuits, verification has become the bottleneck of today's IC design flow. In fact, over 70% of the design turn-around time can be spent on verification in a typical IC design project. Among various verification tasks, Register Transfer Level (RTL) simulation is the most widely used method to validate the correctness of digital IC designs. When simulating a large IC design with complicated internal behaviors (e.g., CPU cores running embedded software), RTL simulation can be extremely time-consuming. Since RTL-to-layout is still the most prevalent IC design methodology, it is essential to speed up the RTL simulation process. Recently, General Purpose computing on Graphics Processing Units (GPGPU) has become a promising paradigm for accelerating compute-intensive workloads. A few recent works have demonstrated the effectiveness of using GPUs to expedite gate- and system-level simulation tasks. In this work, we propose an efficient GPU-accelerated RTL simulation framework. We introduce a methodology to translate a Verilog RTL description into equivalent GPU source code so as to simulate circuit behavior on GPUs. In addition, a Chandy-Misra-Bryant (CMB) based parallel simulation protocol is adopted to provide a sufficient level of parallelism. Because RTL simulation lacks data-level parallelism, we also present a novel solution that uses the GPU as an efficient task-level parallel processor. Experimental results show that our GPU-based simulator outperforms a commercial sequential RTL simulator by more than 20-fold.
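To make the "translate RTL into equivalent executable code" idea concrete, the following is a minimal illustrative sketch, not the authors' framework: a tiny Verilog always-block is hand-compiled into a per-cycle evaluation function, the kind of straight-line code that a translator could emit and that, on a GPU, each independent block could run in its own thread for task-level parallelism. All names here (`make_cycle_eval`, `step`) are hypothetical.

```python
# Illustrative sketch (hypothetical, not the paper's actual translator).
# A Verilog block such as:
#     always @(posedge clk) q <= a & b;
# can be compiled into a function evaluated once per clock cycle.

def make_cycle_eval():
    """Return a function that advances the toy design by one clock cycle."""
    def step(state, inputs):
        # Combinational logic: evaluated from the current cycle's inputs.
        and_out = inputs["a"] & inputs["b"]
        # Sequential update: the register q captures the value at the clock edge.
        return {"q": and_out}
    return step

step = make_cycle_eval()
state = {"q": 0}
trace = []
for a, b in [(1, 1), (1, 0), (0, 1), (1, 1)]:
    state = step(state, {"a": a, "b": b})
    trace.append(state["q"])
# trace == [1, 0, 0, 1]
```

In a cycle-based scheme like this, many such compiled blocks are evaluated concurrently each cycle; a CMB-style protocol additionally lets blocks advance asynchronously while preserving causality between them.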