Large-margin feature selection for monotonic classification
Knowledge-Based Systems
Monotonic classification is a special kind of task in machine learning and pattern recognition, in which monotonicity constraints between features and the decision must be taken into account. However, most existing techniques cannot discover or represent the ordinal structures in monotonic datasets, which makes them inapplicable to monotonic classification. Feature selection has proven effective in improving classification performance and avoiding overfitting, yet, to the best of our knowledge, no technique has been specially designed to select features for monotonic classification. In this paper, we introduce a function, called rank mutual information, to evaluate the monotonic consistency between features and the decision in monotonic tasks. This function combines the advantage of dominance rough sets in reflecting ordinal structures with the robustness of mutual information. Rank mutual information is then integrated with the min-redundancy max-relevance search strategy to compute optimal feature subsets. A collection of numerical experiments demonstrates the effectiveness of the proposed technique.
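To make the two ingredients of the abstract concrete, the following is a minimal sketch of rank mutual information as it is commonly defined in the dominance rough set literature (via dominating sets and rank entropies), together with a greedy mRMR-style selection loop that uses it as both the relevance and redundancy measure. The function names and the exact greedy criterion (relevance minus mean redundancy) are our own illustrative choices, not necessarily the paper's formulation.

```python
import math

def dominating_sets(values):
    """[x_i]^<= : for each sample i, the indices j with values[j] >= values[i]
    (the dominating set of x_i under a single ordinal attribute)."""
    return [{j for j, v in enumerate(values) if v >= values[i]}
            for i in range(len(values))]

def rank_entropy(sets):
    """Ascending rank entropy: RH = -(1/n) * sum_i log(|[x_i]^<=| / n)."""
    n = len(sets)
    return -sum(math.log(len(s) / n) for s in sets) / n

def rank_mutual_information(x, y):
    """RMI(x, y) = RH(x) + RH(y) - RH(x, y), where the joint term uses the
    intersection of the per-sample dominating sets. Unlike Shannon mutual
    information, RMI can be negative when x and y order samples oppositely."""
    sx, sy = dominating_sets(x), dominating_sets(y)
    joint = [a & b for a, b in zip(sx, sy)]
    return rank_entropy(sx) + rank_entropy(sy) - rank_entropy(joint)

def select_mrmr(features, y, k):
    """Greedy mRMR-style search (a sketch): at each step pick the feature
    maximizing RMI with the decision minus its mean RMI with the features
    already selected."""
    selected, remaining = [], list(range(len(features)))
    while remaining and len(selected) < k:
        def score(j):
            rel = rank_mutual_information(features[j], y)
            red = (sum(rank_mutual_information(features[j], features[s])
                       for s in selected) / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

A perfectly monotonic feature attains the maximum RMI with the decision (equal to the decision's own rank entropy), while an anti-monotonic feature yields a negative value, which is what lets the greedy search rank candidates by ordinal consistency.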