The Use of Background Knowledge in Decision Tree Induction. Machine Learning.
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. Machine Learning (special issue on learning with probabilistic representations).
MetaCost: A General Method for Making Classifiers Cost-Sensitive. KDD '99: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Inducing Cost-Sensitive Trees via Instance Weighting. PKDD '98: Proceedings of the Second European Symposium on Principles of Data Mining and Knowledge Discovery.
Mining Frequent Patterns without Candidate Generation: A Frequent-Pattern Tree Approach. Data Mining and Knowledge Discovery.
Pattern Classification (2nd Edition).
Test-Cost Sensitive Naive Bayes Classification. ICDM '04: Proceedings of the Fourth IEEE International Conference on Data Mining.
Learning Policies for Sequential Time and Cost Sensitive Classification. UBDM '05: Proceedings of the 1st International Workshop on Utility-Based Data Mining.
Test-Cost Sensitive Classification on Data with Missing Values. IEEE Transactions on Knowledge and Data Engineering.
Test Strategies for Cost-Sensitive Decision Trees. IEEE Transactions on Knowledge and Data Engineering.
The Foundations of Cost-Sensitive Learning. IJCAI '01: Proceedings of the 17th International Joint Conference on Artificial Intelligence, Volume 2.
Much work [1][2] has been done on test-cost-sensitive learning for data with missing values. However, most previous work focuses only on cost and ignores the importance of time. In this paper, we address how to choose which unknown attributes to test within a limited time so as to minimize the total cost. We propose a multi-batch strategy applied to a test-cost-sensitive Naïve Bayes classifier and evaluate its performance on several data sets. We build a graph over the attributes that encodes both the vertex costs (the cost of testing each individual attribute) and the set costs (the cost shared by attributes tested together), and then use a randomized algorithm to select the unknown attributes to test in each cycle. Experimental results show that our algorithm significantly outperforms previous algorithms [3][4].
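To make the multi-batch idea concrete, below is a minimal Python sketch of one testing cycle: attributes carry individual vertex costs, groups of attributes share a set cost, and a randomized search picks the cheapest batch that fits the time budget. All names, numbers, and the batch-time assumption here are illustrative, not the authors' implementation; in particular, a real strategy would also weigh the expected reduction in misclassification cost under the Naïve Bayes model.

import random

# Hypothetical inputs (illustrative only): per-attribute test costs
# ("vertex costs"), per-attribute test times, and per-group costs
# ("set costs") shared by attributes that can be batched together.
vertex_cost = {"a1": 5.0, "a2": 3.0, "a3": 8.0, "a4": 2.0}
test_time   = {"a1": 1.0, "a2": 2.0, "a3": 1.5, "a4": 0.5}
group_of    = {"a1": "g1", "a2": "g1", "a3": "g2", "a4": "g2"}
set_cost    = {"g1": 4.0, "g2": 6.0}

def batch_cost(batch):
    """Cost of a batch: the vertex cost of every attribute in it,
    plus one set cost for each distinct group the batch touches."""
    groups = {group_of[a] for a in batch}
    return sum(vertex_cost[a] for a in batch) + sum(set_cost[g] for g in groups)

def batch_time(batch):
    """Tests in one batch run in parallel, so the batch takes as long
    as its slowest test (an assumption of this sketch)."""
    return max(test_time[a] for a in batch)

def pick_batch(unknown, time_budget, trials=200, rng=random):
    """Randomized selection for one testing cycle: sample candidate
    batches of unknown attributes, keep the cheapest one that fits
    within the time budget, and return it (None if nothing fits)."""
    unknown = sorted(unknown)
    best, best_cost = None, float("inf")
    for _ in range(trials):
        batch = rng.sample(unknown, rng.randint(1, len(unknown)))
        cost = batch_cost(batch)
        if batch_time(batch) <= time_budget and cost < best_cost:
            best, best_cost = batch, cost
    return best

# One cycle: choose which still-unknown attributes to test next.
print(pick_batch({"a1", "a2", "a3", "a4"}, time_budget=1.5))

With the toy numbers above, the budget of 1.5 excludes any batch containing a2, and the search settles on the cheapest feasible batch; subsequent cycles would rerun pick_batch on the attributes that remain unknown.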