Feature selection with test cost constraint
International Journal of Approximate Reasoning
Cost-sensitive learning extends classical machine learning by taking into account various types of costs associated with the data, such as test costs and misclassification costs. In many applications, a test cost constraint arises from limited money, time, or other resources, so a set of tests must be chosen deliberately to preserve as much classification-relevant information as possible. To address this issue, we define optimal sub-reducts with test cost constraint and the corresponding problem of finding them. The new problem generalizes two existing problems, namely the minimal test cost reduct problem and the 0-1 knapsack problem, and is therefore more challenging than either of them. We propose two exhaustive algorithms: one is straightforward, while the other takes advantage of some properties of the problem. The efficiencies of the two algorithms are compared through experiments on the mushroom dataset, and some potential enhancements are pointed out.
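The problem described above can be illustrated with a small sketch. The abstract does not give the authors' algorithms in detail, so the code below is only an assumption-laden illustration of the straightforward exhaustive variant: it enumerates all attribute subsets whose total test cost fits the budget and keeps the one with the highest quality, where quality is measured here by a simple positive-region ratio (the fraction of objects whose equivalence class under the chosen attributes is label-consistent). The function names `quality` and `best_subreduct` are hypothetical, not from the paper.

```python
from itertools import combinations

def quality(data, labels, attrs):
    """Fraction of objects consistently classified by the equivalence
    classes induced by the chosen attributes (a simple positive-region
    measure; the paper's exact measure may differ)."""
    groups = {}
    for row, lab in zip(data, labels):
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, set()).add(lab)
    consistent = sum(
        1 for row, lab in zip(data, labels)
        if len(groups[tuple(row[a] for a in attrs)]) == 1
    )
    return consistent / len(data)

def best_subreduct(data, labels, costs, budget):
    """Exhaustively search all attribute subsets whose total test cost
    fits the budget; return the highest-quality subset found."""
    n = len(costs)
    best, best_q = (), -1.0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            if sum(costs[a] for a in subset) <= budget:
                q = quality(data, labels, subset)
                if q > best_q:
                    best, best_q = subset, q
    return best, best_q

# Toy example: attribute 0 determines the label but is too expensive;
# attribute 2 also determines it and fits the budget.
data = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
labels = [0, 0, 1, 1]
print(best_subreduct(data, labels, costs=[5, 2, 3], budget=4))
```

The exhaustive loop makes the knapsack-like structure explicit: the cost check `sum(costs[a] for a in subset) <= budget` is the knapsack constraint, while the quality function plays the role that the reduct condition plays in the minimal test cost reduct problem. The second, property-exploiting algorithm mentioned in the abstract would prune this search rather than visit every subset.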