Incremental Exemplar Learning Schemes for Classification on Embedded Devices
ECML PKDD '08 Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases - Part I
Although memory-based classifiers offer robust classification performance, their widespread use on embedded devices is hindered by the devices' limited memory resources. Moreover, embedded devices often operate in environments where the data exhibits evolutionary changes, which entails frequent updates to the in-memory training data. A viable option for dealing with the memory constraint is to use Exemplar Learning (EL) schemes, which learn a small memory set (called the exemplar set) of high functional information that fits in memory. However, traditional EL schemes have several drawbacks that make them inapplicable to embedded devices: (1) they have high memory overheads and are unable to handle incremental updates to the exemplar set, (2) they cannot be customized to obtain exemplar sets of an arbitrary user-defined size that fits in memory, and (3) they learn exemplar sets based on local neighborhood structures that do not offer robust classification performance. In this paper, we propose two novel EL schemes, $\mathsf{EBEL}$ (Entropy-Based Exemplar Learning) and $\mathsf{ABEL}$ (AUC-Based Exemplar Learning), that overcome the aforementioned shortcomings of traditional EL algorithms. We show that our schemes efficiently incorporate new training datasets while maintaining high-quality exemplar sets of any user-defined size. We present a comprehensive experimental analysis showing excellent classification-accuracy versus memory-usage tradeoffs for our proposed methods.
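To make the exemplar-learning setting concrete, the following is a toy sketch of growing a memory set up to a user-defined budget by greedy addition. It is not the paper's EBEL or ABEL algorithm: the 1-NN-accuracy selection criterion below is a hypothetical stand-in for the entropy- and AUC-based scores, and all names and data are illustrative only.

```python
import math

def nearest_label(exemplars, point):
    """Label of the exemplar closest to `point` (Euclidean distance)."""
    best = min(exemplars, key=lambda ex: math.dist(ex[0], point))
    return best[1]

def greedy_exemplar_set(data, budget):
    """Greedily grow an exemplar set up to `budget` points.

    At each step, add the training point that maximizes the 1-NN
    accuracy of the current exemplar set on the full training data.
    (A stand-in criterion; EBEL/ABEL use entropy/AUC-based scores.)
    """
    exemplars, remaining = [], list(data)
    while remaining and len(exemplars) < budget:
        def score(candidate):
            trial = exemplars + [candidate]
            return sum(nearest_label(trial, x) == y for x, y in data)
        best = max(remaining, key=score)
        exemplars.append(best)
        remaining.remove(best)
    return exemplars

# Toy 2-D data: two well-separated classes of three points each.
data = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
        ((5.0, 5.0), "b"), ((5.1, 4.9), "b"), ((4.9, 5.2), "b")]
exemplars = greedy_exemplar_set(data, budget=2)
acc = sum(nearest_label(exemplars, x) == y for x, y in data) / len(data)
print(len(exemplars), acc)
```

On this toy data, a budget of two exemplars (one per class) already classifies all six training points correctly, illustrating the accuracy-versus-memory tradeoff the paper studies.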