In this paper, we propose a number of adaptive prototype learning (APL) algorithms. They employ the same algorithmic scheme to determine the number and location of prototypes, but differ in whether they use samples themselves or weighted averages of samples as prototypes, and in the choice of distance measure. To understand these algorithms from a theoretical viewpoint, we address their convergence properties, as well as their consistency under certain conditions. We also present a soft version of APL, in which a non-zero training error is allowed in order to enhance the generalization power of the resultant classifier. Applying the proposed algorithms to twelve UCI benchmark data sets, we demonstrate that they outperform many instance-based learning algorithms, the k-nearest neighbor rule, and support vector machines in terms of average test accuracy.
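The abstract describes APL only at a high level. As an illustrative sketch of the general idea it builds on — growing a set of prototypes until the nearest-prototype rule classifies every training sample correctly, then classifying new points by their nearest prototype — the following condensing-style loop is a minimal example. This is an assumption-laden toy, not the APL algorithm from the paper; the greedy selection rule and squared-Euclidean distance are choices made here for brevity.

```python
import numpy as np

def fit_prototypes(X, y):
    """Greedy prototype selection (illustrative only, NOT the paper's APL):
    repeatedly add a misclassified training sample as a new prototype until
    the nearest-prototype rule fits the training set with zero error."""
    protos_X = [X[0]]
    protos_y = [y[0]]
    changed = True
    while changed:
        changed = False
        for xi, yi in zip(X, y):
            P = np.array(protos_X)
            # index of the nearest prototype under squared Euclidean distance
            nearest = np.argmin(((P - xi) ** 2).sum(axis=1))
            if protos_y[nearest] != yi:
                protos_X.append(xi)   # keep the misclassified sample as a prototype
                protos_y.append(yi)
                changed = True
    return np.array(protos_X), np.array(protos_y)

def predict(protos_X, protos_y, X):
    """Classify each row of X by the label of its nearest prototype."""
    d = ((X[:, None, :] - protos_X[None, :, :]) ** 2).sum(axis=2)
    return protos_y[np.argmin(d, axis=1)]
```

On well-separated data this typically retains far fewer prototypes than training samples, which is the storage-reduction motivation shared by instance-based learning methods; the paper's soft APL variant additionally tolerates a non-zero training error rather than insisting on a perfect fit.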