Bayes Error Estimation Using Parzen and k-NN Procedures
IEEE Transactions on Pattern Analysis and Machine Intelligence
The use of nonparametric error estimates may lead to biased results if the kernel covariances are estimated from the same data as those used to form the error estimate. If additional design samples are available, one may eliminate this bias by estimating the class covariances from an independent set of data. If additional samples are not available, however, one may resort to leave-one-out-type estimates of the kernel (for Parzen estimates) or the metric (for nearest-neighbor estimates) for every sample being tested. The authors present an efficient algorithm for computing these leave-one-out-type estimates that adds little computational burden over procedures currently in use. The presentation applies to both Parzen and k-nearest-neighbor (k-NN) estimates. Experimental results demonstrating the efficiency of the algorithm are provided.
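The leave-one-out idea behind these estimators can be illustrated with a minimal sketch (this is not the authors' efficient algorithm, only the naive baseline it accelerates): each sample is classified by a k-NN rule whose design set is all of the remaining samples, so the tested sample never contributes to its own decision. The function name `loo_knn_error` and the global Euclidean metric are illustrative assumptions.

```python
import numpy as np

def loo_knn_error(X, y, k=1):
    """Naive leave-one-out k-NN error estimate.

    Each sample is classified using every *other* sample as the
    design set, avoiding the optimistic bias of resubstitution.
    (Illustrative sketch with a plain Euclidean metric; the paper's
    contribution is computing per-sample kernel/metric estimates
    efficiently, which this baseline does not do.)
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n = len(y)
    # Pairwise squared Euclidean distances between all samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # Exclude each sample from its own neighbor list.
    np.fill_diagonal(d2, np.inf)
    errors = 0
    for i in range(n):
        nn = np.argsort(d2[i])[:k]          # k nearest neighbors of sample i
        votes = np.bincount(y[nn])          # majority vote among their labels
        if votes.argmax() != y[i]:
            errors += 1
    return errors / n
```

For well-separated classes the estimate is near zero; as class overlap grows it approaches the Bayes error from above, which is what makes the leave-one-out form useful as an upper-bound companion to the (optimistically biased) resubstitution estimate.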