It is widely believed in the pattern recognition field that, when a classifier is designed from a fixed number of training samples, its generalization error tends to increase as the number of features grows. In this paper, we discuss the generalization error of artificial neural network (ANN) classifiers in high-dimensional spaces, under the practical condition that the ratio of the training sample size to the dimensionality is small. Experimental results show that the generalization error of ANN classifiers appears to be much less sensitive to the feature size than that of the 1-NN, Parzen, and quadratic classifiers.
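The dimensionality effect described above can be illustrated with a minimal numpy sketch (this is not the paper's experiment; the two-Gaussian setup, the sample sizes, and the mean shift `delta` are assumptions chosen for illustration). With the training sample size held fixed, a 1-NN classifier's test error rises as uninformative feature dimensions are added:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_nn_error(d, n_train=20, n_test=200, delta=2.0):
    """Test error of a 1-NN classifier on two Gaussian classes whose
    means differ only in the first coordinate; the remaining d-1
    coordinates are pure noise, so the Bayes error is the same for all d."""
    def sample(n, label):
        x = rng.standard_normal((n, d))
        x[:, 0] += delta * label  # class 1 is shifted along the first axis
        return x

    Xtr = np.vstack([sample(n_train, 0), sample(n_train, 1)])
    ytr = np.repeat([0, 1], n_train)
    Xte = np.vstack([sample(n_test, 0), sample(n_test, 1)])
    yte = np.repeat([0, 1], n_test)

    # 1-NN: assign each test point the label of its nearest training point
    dists = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    pred = ytr[np.argmin(dists, axis=1)]
    return float(np.mean(pred != yte))

# Fixed training size, growing dimensionality: error climbs toward chance
errors = {d: one_nn_error(d) for d in (2, 10, 50, 200)}
```

Because the added coordinates carry no class information, the nearest-neighbor distances become dominated by noise as `d` grows, and the error approaches the 50% chance level even though the Bayes error is unchanged; the paper's claim is that ANN classifiers degrade far more slowly under the same small sample-to-dimensionality ratio.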