Feature Selection Using a Hybrid Associative Classifier with Masking Techniques
MICAI '06 Proceedings of the Fifth Mexican International Conference on Artificial Intelligence
Performance in most pattern classifiers improves when redundant or irrelevant features are removed; however, this is usually achieved through computationally demanding methods or the construction of successive classifiers. This paper shows how associative memories can be used to obtain a mask value representing a subset of features, one that clearly identifies information that is irrelevant or redundant for classification purposes. As a result, classification accuracy improves while the computational cost of the learning phase is significantly reduced. An optimal feature subset also allows register-size optimization, which contributes not only to significant power savings but also to a smaller amount of synthesized logic. Moreover, the smaller functional units enable improved hardware architectures, making it possible to implement parallel and cascade pattern-classifier schemes on the same ASIC.
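The core idea of feature masking can be illustrated independently of the paper's associative-memory procedure: a binary mask selects a subset of features, and a mask is good if the classifier is at least as accurate with the masked-out features removed. The sketch below is a minimal illustration only, using a toy dataset, a plain 1-nearest-neighbour classifier, and exhaustive mask search in place of the hybrid associative classifier described in the paper; all data and function names are invented for the example.

```python
from itertools import product

# Toy dataset (illustrative only, not from the paper):
# features 0 and 1 separate the classes, feature 2 is noise.
TRAIN = [([0.1, 0.2, 0.9], 0), ([0.2, 0.1, 0.1], 0),
         ([0.9, 0.8, 0.5], 1), ([0.8, 0.9, 0.2], 1)]
TEST  = [([0.15, 0.15, 0.8], 0), ([0.2, 0.2, 0.5], 0),
         ([0.85, 0.85, 0.1], 1), ([0.9, 0.7, 0.9], 1)]

def masked_dist(a, b, mask):
    """Squared Euclidean distance over the features the mask keeps."""
    return sum((x - y) ** 2 for x, y, m in zip(a, b, mask) if m)

def classify(sample, train, mask):
    """1-nearest-neighbour label, using only the masked-in features."""
    return min(train, key=lambda t: masked_dist(sample, t[0], mask))[1]

def accuracy(mask, train, test):
    hits = sum(classify(x, train, mask) == y for x, y in test)
    return hits / len(test)

def best_mask(train, test, n_features):
    """Exhaustive search over non-empty feature masks.

    Feasible only for a handful of features; the paper's contribution
    is precisely to avoid this kind of costly search.
    """
    masks = [m for m in product([0, 1], repeat=n_features) if any(m)]
    return max(masks, key=lambda m: accuracy(m, train, test))

mask = best_mask(TRAIN, TEST, 3)
print("selected mask:", mask, "accuracy:", accuracy(mask, TRAIN, TEST))
```

Because the search includes the all-ones mask, the selected mask can never score worse than using every feature; a smaller winning mask directly reflects the paper's point that dropping features can preserve accuracy while shrinking the hardware (fewer registers and smaller functional units).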