The last two decades have seen many powerful classification systems being built for large-scale real-world applications. However, for all their accuracy, one of the persistent obstacles facing these systems is that of data dimensionality. To enable such systems to be effective, a redundancy-removing step is usually required to pre-process the given data. Rough set theory offers a useful, and formal, methodology that can be employed to reduce the dimensionality of datasets. It helps select the most information-rich features in a dataset, without transforming the data, all the while attempting to minimise information loss during the selection process. Based on this observation, this paper discusses an approach for semantics-preserving dimensionality reduction, or feature selection, that simplifies domains to aid in developing fuzzy or neural classifiers. Computationally, the approach is highly efficient, relying on simple set operations only. The success of this work is illustrated by applying it to two real-world problems: industrial plant monitoring and medical image analysis.
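To make the "simple set operations" concrete: rough-set feature selection typically partitions the data by the indiscernibility relation over a candidate attribute subset and measures how much of the data falls in the positive region (blocks pure with respect to the decision label). A greedy search in the QuickReduct style then adds the attribute that most increases this dependency degree. The sketch below is illustrative only, assuming tabular data as tuples of discrete values; function names and the exact search strategy are not taken from the paper itself.

```python
from collections import defaultdict

def partition(rows, attrs):
    """Group row indices by their values on the given attributes,
    i.e. the blocks of the indiscernibility relation IND(attrs)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, labels, attrs):
    """Degree of dependency gamma(attrs): the fraction of rows whose
    indiscernibility block is pure w.r.t. the decision label
    (i.e. lies in the positive region)."""
    if not attrs:
        return 0.0
    pos = sum(len(block) for block in partition(rows, attrs)
              if len({labels[i] for i in block}) == 1)
    return pos / len(rows)

def quickreduct(rows, labels, all_attrs):
    """Greedy reduct search: repeatedly add the attribute that most
    increases the dependency degree, stopping once it matches the
    dependency of the full attribute set."""
    target = dependency(rows, labels, all_attrs)
    reduct = []
    while dependency(rows, labels, reduct) < target:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(rows, labels, reduct + [a]))
        reduct.append(best)
    return reduct
```

On a toy table where a single attribute determines the class, the search returns just that attribute, discarding the redundant ones without transforming any feature values:

```python
rows = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
labels = [0, 0, 1, 1]
quickreduct(rows, labels, [0, 1, 2])  # a reduct containing only attribute 1
```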