The constant pressure to make functional verification more agile has led to the conception of coverage-driven verification (CDV) techniques. CDV has been implemented in verification testbenches using supervised learning to model the relationship between coverage events and stimulus generation, providing feedback between the two. One commonly used technique is classification- or decision-tree data mining, which has proven suitable because of its straightforward modeling. Learning techniques are applied in two steps: training and application. Training is performed on one or more sets of examples that relate datasets to predetermined classes. The precision of the resulting predictions has proven sensitive to the size of the training set and to the degree of class imbalance, i.e., how unevenly the datasets are distributed among the classes. This work presents experiments on manipulating data-mining training sets, changing their size and reducing their imbalance, to assess the influence of both factors on CDV efficiency. Two circuit examples were selected: one from the multimedia application domain, with a large input space and strong class imbalance, and another from the communication area, with a small input space that affects the coverage occurrences.
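The training-set manipulation described above can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a hypothetical example, in plain Python, of one simple way to reduce class imbalance before training a classifier: randomly duplicating minority-class examples until every class matches the majority class in size (a simpler alternative to techniques such as SMOTE). The coverage-event labels (`"hit"`/`"miss"`) and the toy dataset are assumptions made for illustration.

```python
import random
from collections import Counter

def oversample_to_balance(examples, seed=0):
    """Randomly duplicate minority-class examples until every class
    has as many examples as the largest class, reducing the kind of
    training-set imbalance that hurts decision-tree precision."""
    rng = random.Random(seed)
    by_class = {}
    for features, label in examples:
        by_class.setdefault(label, []).append((features, label))
    target = max(len(items) for items in by_class.values())
    balanced = []
    for label, items in by_class.items():
        balanced.extend(items)
        # pad the minority classes with randomly chosen duplicates
        balanced.extend(rng.choice(items) for _ in range(target - len(items)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical coverage-event training set: the "hit" class is rare,
# mirroring the strong imbalance seen in the multimedia circuit example.
train = [((i, i % 3), "miss") for i in range(18)] + [((i, 1), "hit") for i in range(2)]
balanced = oversample_to_balance(train)
print(Counter(label for _, label in balanced))
```

After balancing, both classes contribute 18 examples each, so a decision tree trained on the result no longer sees the rare coverage events swamped by the majority class; the trade-off of plain duplication is a higher risk of overfitting to the few distinct minority examples.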