The challenge of verifying a modern microprocessor design is an overwhelming one: increasingly complex micro-architectures combined with heavy time-to-market pressure have forced microprocessor vendors to employ immense verification teams in the hope of finding the most critical bugs in a timely manner. Unfortunately, too often size doesn't seem to matter for verification teams, as design schedules continue to slip and microprocessors find their way to the marketplace with design errors. In this paper, we describe a simulation-based random test generation tool, called StressTest, that provides assistance in locating hard-to-find corner-case design bugs and performance problems. StressTest is based on a Markov-model-driven random instruction generator with activity monitors. The model is generated from user-specified template programs and is used to generate the instructions sent to the design under test (DUT). In addition, the user specifies key activity points within the design that should be stressed and monitored throughout the simulation. The StressTest engine then uses closed-loop feedback techniques to transform the Markov model into one that effectively stresses the points of interest. In parallel, StressTest monitors the correctness of the DUT response to the supplied stimuli, and if the design behaves unexpectedly, a bug and a trace that leads to it are reported. Using two micro-architectures as example testbeds, we demonstrate that StressTest finds more bugs with less effort than open-loop random instruction test generation techniques.
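To make the closed-loop idea concrete, the sketch below shows a minimal Markov-chain instruction generator whose transition weights are reinforced by feedback from an activity monitor. It is an illustration of the general technique only, not the authors' implementation: the instruction classes, the `simulate` callback, and the reinforcement rate are all hypothetical.

```python
import random

class MarkovGenerator:
    """Minimal sketch of a Markov-model-driven stimulus generator
    with closed-loop feedback, in the spirit of the flow described
    in the abstract. All names here are illustrative assumptions."""

    def __init__(self, states, transitions):
        # states: instruction classes, e.g. ["alu", "load", "store", "branch"]
        # transitions: dict mapping state -> {next_state: weight}
        self.states = states
        self.transitions = transitions
        self.current = random.choice(states)

    def next_instruction(self):
        # Sample the next instruction class from the current state's
        # weighted outgoing transitions.
        weights = self.transitions[self.current]
        nxt = random.choices(list(weights), weights=list(weights.values()))[0]
        self.current = nxt
        return nxt

    def reinforce(self, path, reward, rate=0.1):
        # Closed-loop feedback: boost the transitions taken by a test
        # sequence in proportion to the activity it triggered, so that
        # later tests stress the monitored points harder.
        for a, b in zip(path, path[1:]):
            self.transitions[a][b] *= (1.0 + rate * reward)

def run_stress_loop(gen, simulate, n_tests=100, test_len=50):
    # simulate(seq) is a stand-in for driving the DUT with a test
    # sequence and returning the activity score reported by the
    # user-specified monitors (a hypothetical interface).
    best = 0.0
    for _ in range(n_tests):
        seq = [gen.next_instruction() for _ in range(test_len)]
        activity = simulate(seq)
        best = max(best, activity)
        gen.reinforce(seq, activity)
    return best
```

Under these assumptions, the loop plays the role of the StressTest engine's feedback step: each test skews the Markov model toward instruction sequences that raised activity at the points of interest, while correctness checking of the DUT response would run alongside it.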