Learning to extract symbolic knowledge from the World Wide Web
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
An Evaluation of Statistical Approaches to Text Categorization
Information Retrieval
Text Categorization with Support Vector Machines: Learning with Many Relevant Features
ECML '98 Proceedings of the 10th European Conference on Machine Learning
A Rough Set-Based Hybrid Method to Text Categorization
WISE '01 Proceedings of the Second International Conference on Web Information Systems Engineering (WISE'01), Volume 1
Genetic algorithm-based feature set partitioning for classification problems
Pattern Recognition
Text classification: a recent overview
ICCOMP'05 Proceedings of the 9th WSEAS International Conference on Computers
Computational Statistics & Data Analysis
Nearest neighbor classification by relearning
IDEAL'09 Proceedings of the 10th international conference on Intelligent data engineering and automated learning
Text classification: combining grouping, LSA and kNN vs support vector machine
KES'06 Proceedings of the 10th International Conference on Knowledge-Based Intelligent Information and Engineering Systems, Part II
Classification by weighting, similarity and kNN
IDEAL'06 Proceedings of the 7th international conference on Intelligent Data Engineering and Automated Learning
The basic k-nearest neighbor classifier works well in text classification. However, improving its performance remains an attractive goal. Combining multiple classifiers is an effective technique for improving accuracy, and there are many general combining algorithms, such as Bagging or Boosting, that significantly improve classifiers such as decision trees, rule learners, or neural networks. Unfortunately, these combining methods do not improve nearest neighbor classifiers. In this paper we present a new approach that generates multiple reducts based on rough set theory and applies them to improve the performance of the k-nearest neighbor classifier. This paper describes the proposed technique and provides experimental results.
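A minimal sketch of the general idea: combine several k-nearest neighbor classifiers, each restricted to one feature subset (a "reduct"), by majority vote. The reducts below are hypothetical stand-ins rather than the paper's rough-set-derived reducts, and scikit-learn's KNeighborsClassifier is assumed only for illustration.

import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def train_reduct_knn(X, y, reducts, k=3):
    # Fit one k-NN classifier per reduct (each reduct is an array of feature indices).
    return [(r, KNeighborsClassifier(n_neighbors=k).fit(X[:, r], y)) for r in reducts]

def predict_by_vote(models, X):
    # Collect per-reduct predictions and take the majority label for each sample.
    votes = np.array([clf.predict(X[:, r]) for r, clf in models])
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

# Toy usage with random data and two hypothetical reducts.
X = np.random.rand(20, 6)
y = np.random.randint(0, 2, 20)
reducts = [np.array([0, 1, 2]), np.array([3, 4, 5])]
models = train_reduct_knn(X, y, reducts)
print(predict_by_vote(models, X[:5]))

Because each member classifier sees a different feature subset, the ensemble introduces the diversity that plain Bagging or Boosting fails to create for nearest neighbor classifiers, which is the motivation stated in the abstract.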