Instance-Based Learning Algorithms
Machine Learning
Selecting typical instances in instance-based learning
ML92 Proceedings of the ninth international workshop on Machine learning
A hybrid nearest-neighbor and nearest-hyperrectangle algorithm
ECML-94 Proceedings of the European Conference on Machine Learning
On the boosting ability of top-down decision tree learning algorithms
STOC '96 Proceedings of the twenty-eighth annual ACM symposium on Theory of computing
Improved boosting algorithms using confidence-rated predictions
COLT' 98 Proceedings of the eleventh annual conference on Computational learning theory
Reduction Techniques for Instance-Based Learning Algorithms
Machine Learning
ICML '97 Proceedings of the Fourteenth International Conference on Machine Learning
On the Consistency of Information Filters for Lazy Learning Algorithms
PKDD '99 Proceedings of the Third European Conference on Principles of Data Mining and Knowledge Discovery
Contribution of Boosting in Wrapper Models
PKDD '99 Proceedings of the Third European Conference on Principles of Data Mining and Knowledge Discovery
On Feature Selection: A New Filter Model
Proceedings of the Twelfth International Florida Artificial Intelligence Research Society Conference
Prototype Selection for Composite Nearest Neighbor Classifiers
Rule induction and instance-based learning: a unified approach
IJCAI'95 Proceedings of the 14th international joint conference on Artificial intelligence - Volume 2
Identifying and eliminating mislabeled training instances
AAAI'96 Proceedings of the thirteenth national conference on Artificial intelligence - Volume 1
While classical approaches treat prototype selection (PS) as an accuracy-maximization problem, in this paper we investigate PS as an information-preserving problem. We use information theory to build a statistical criterion from the nearest-neighbor topology. This statistical framework drives a backward prototype selection algorithm (PSRCG), which identifies and eliminates uninformative instances, thereby reducing the global uncertainty of the learning set. From experimental results and rigorous comparisons we draw two main conclusions: (i) our approach offers a good compromise, keeping a small number of prototypes without sacrificing classification accuracy; (ii) the PSRCG algorithm appears robust in the presence of noise. Performance on several benchmarks supports the relevance and effectiveness of our method in comparison with classic accuracy-based PS algorithms.
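The backward-selection idea described in the abstract — score each instance by the uncertainty of its nearest-neighbor neighborhood and greedily discard the most uninformative one while the global uncertainty of the set keeps decreasing — can be sketched as below. The abstract does not give PSRCG's exact statistical criterion, so the Shannon-entropy measure, the `k` parameter, and the stopping rule here are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def local_label_entropy(X, y, k=3):
    """For each instance, the Shannon entropy of the labels in its
    neighborhood (its own label plus those of its k nearest neighbors).
    The mean of these values serves as a proxy for the 'global
    uncertainty' of the learning set."""
    n = len(X)
    classes = np.unique(y)
    ent = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        nbrs = np.argsort(d)[:k]
        labels = np.append(y[nbrs], y[i])
        p = np.array([(labels == c).mean() for c in classes])
        p = p[p > 0]
        ent[i] = -(p * np.log2(p)).sum()
    return ent

def backward_prototype_selection(X, y, k=3, min_size=5):
    """Greedy backward selection: repeatedly drop the instance whose
    neighborhood is most uncertain, as long as the mean uncertainty of
    the remaining set decreases (an assumed stopping rule)."""
    keep = np.arange(len(X))
    current = local_label_entropy(X[keep], y[keep], k).mean()
    while len(keep) > min_size:
        ent = local_label_entropy(X[keep], y[keep], k)
        worst = np.argmax(ent)              # most 'uninformative'/noisy point
        cand = np.delete(keep, worst)
        new = local_label_entropy(X[cand], y[cand], k).mean()
        if new >= current:                  # uncertainty no longer drops: stop
            break
        keep, current = cand, new
    return keep
```

On two well-separated clusters with one mislabeled point, this sketch removes the mislabeled point first (its neighborhood is maximally mixed) and then stops, since every remaining neighborhood is pure and the mean entropy cannot decrease further.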