Explaining the mismatch between the timing behavior predicted by modeling and simulation and the timing behavior measured on silicon chips can be very challenging. Given a list of potential sources, the mismatch can be the aggregate effect of several of them, acting both individually and in combination, which results in a very large search space. Furthermore, the observed data are always corrupted by unknown random statistical noise. In this paper, we show how explaining the mismatch observed on silicon can be classified as an ill-posed problem, where ill-posed means that the solution may not be unique or stable; a small change in the observed response can therefore cause a large change in the inferred solution. To solve ill-posed problems, statistical learning theory uses a principle called regularization. This paper proposes using a statistical learning method called support vector (SV) analysis to statistically analyze all known sources of uncertainty, with the objective of ranking the sources by how much they contribute to the observed mismatch. Experimental results under different error-model assumptions compare two kinds of SV ranking approaches against four other ranking approaches, some of which use the idea of regularization and some of which do not. The paper concludes with a self-cross-validation approach for validating the ranking results when no true ranking is available, as is the case with actual silicon.
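The ranking idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes synthetic sensitivity data in which only two uncertainty sources actually drive the mismatch, fits a regularized linear support vector regression (the epsilon-insensitive loss plus the C-controlled penalty supply the regularization that stabilizes the otherwise ill-posed fit), and then ranks sources by the magnitude of their fitted coefficients. All variable names and the data-generation scheme are invented for the example.

```python
# Illustrative sketch of SV-based ranking of mismatch sources.
# Assumes scikit-learn is available; the data here are synthetic.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

n_paths, n_sources = 200, 6
# X[i, j]: sensitivity of measured path i to uncertainty source j.
X = rng.normal(size=(n_paths, n_sources))
# Ground truth for the synthetic data: only sources 0 and 3 contribute.
true_w = np.array([2.0, 0.0, 0.0, 1.0, 0.0, 0.0])
# Observed mismatch = linear contribution + random measurement noise.
y = X @ true_w + rng.normal(scale=0.3, size=n_paths)

# Regularized SV regression; C trades data fit against stability.
svr = LinearSVR(C=1.0, epsilon=0.1, max_iter=10000).fit(X, y)

# Rank sources by absolute coefficient: larger means a bigger
# estimated contribution to the observed mismatch.
ranking = np.argsort(-np.abs(svr.coef_))
print(ranking)  # sources 0 and 3 should rank first
```

In practice, where no true ranking exists (as on actual silicon), the stability of such a ranking can be probed by refitting on held-out subsets of the measurements and checking that the top-ranked sources persist, in the spirit of the self-cross-validation the paper describes.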