Instance-Based Learning Algorithms
Machine Learning
Vector quantization and signal compression
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Making large-scale support vector machine learning practical
Advances in kernel methods
A study of support vectors on model independent example selection
KDD '99 Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining
Handling concept drifts in incremental learning with support vector machines
KDD '99 Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining
An efficient and scalable data compression approach to classification
ACM SIGKDD Explorations Newsletter - Special issue on “Scalable data mining algorithms”
On Bias, Variance, 0/1—Loss, and the Curse-of-Dimensionality
Data Mining and Knowledge Discovery
Vector Quantization Technique for Nonparametric Classifier Design
IEEE Transactions on Pattern Analysis and Machine Intelligence
Quantizing for minimum average misclassification risk
IEEE Transactions on Neural Networks
The paper presents a data compression approach to classification based on a stochastic gradient algorithm that minimizes the average misclassification risk of a Labeled Vector Quantizer. The main properties of the approach concern both the efficiency of the learning process and the efficiency and accuracy of the classification process. The approach is compared with the closely related nearest neighbor rule, and with two data reduction algorithms, SVM and IB2, in experiments on real data sets from the UCI repository.
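The core idea described in the abstract (a set of labeled prototypes updated stochastically so that classification error decreases, then used as a compressed nearest-prototype classifier) can be sketched with an LVQ1-style update rule. This is a simplified illustration, not the paper's exact risk-minimization algorithm: the function names, learning rate, and prototype counts below are assumptions made for the sketch.

```python
import numpy as np

def train_lvq(X, y, n_protos_per_class=2, lr=0.1, epochs=30, seed=0):
    """LVQ1-style training sketch: for each sample, move the nearest
    prototype toward it if their labels agree, away otherwise. This is a
    stochastic, gradient-like update that tends to reduce misclassification."""
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    # Initialize prototypes from random samples of each class.
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), n_protos_per_class, replace=False)
        protos.append(X[idx].astype(float))
        labels.extend([c] * n_protos_per_class)
    P, L = np.vstack(protos), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)   # distance to each prototype
            j = np.argmin(d)                        # nearest prototype
            step = lr * (X[i] - P[j])
            P[j] += step if L[j] == y[i] else -step
    return P, L

def predict_lvq(P, L, X):
    """Classify by the label of the nearest prototype (compressed 1-NN)."""
    d = np.linalg.norm(P[None, :, :] - X[:, None, :], axis=2)
    return L[np.argmin(d, axis=1)]
```

The compression aspect is that classification requires distances only to the few prototypes rather than to every training sample, which is what makes the approach efficient relative to the plain nearest neighbor rule.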