Classification by weighting, similarity and kNN
IDEAL'06 Proceedings of the 7th international conference on Intelligent Data Engineering and Automated Learning
The k-nearest neighbor (kNN) classifier is widely used for data classification because it is simple and effective. Nevertheless, improving its classification accuracy remains an attractive goal. In this work, a tolerant rough set is taken as the basis for classifying data, and classification is carried out by kNN with a distance function. To improve classification accuracy, a weighted distance function is introduced, and its weights are optimized by a genetic algorithm. After learning on the training data, unknown data are classified by kNN with the learned distance function. To further improve the performance of the kNN classifier, a relearning method is proposed. In experiments on benchmark datasets from the UCI Machine Learning Repository, the proposed relearning method achieves higher generalization accuracy than the basic kNN with a distance function and other conventional learning algorithms.
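The core idea of the abstract — kNN with a per-feature weighted distance whose weights are tuned by a genetic algorithm — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the distance form (weighted Euclidean), the GA operators (truncation selection, uniform crossover, Gaussian mutation), and all parameter values are assumptions for the sake of the example, and the tolerant-rough-set component and relearning step are omitted.

```python
# Sketch: weighted-distance kNN with GA-optimized feature weights.
# Illustrative only; operators and parameters are assumptions, not the paper's.
import random
from collections import Counter

def weighted_distance(a, b, w):
    """Weighted Euclidean distance; w holds one weight per feature."""
    return sum(wi * (ai - bi) ** 2 for wi, ai, bi in zip(w, a, b)) ** 0.5

def knn_predict(x, data, labels, w, k=3):
    """Classify x by majority vote among its k nearest training points."""
    nearest = sorted(range(len(data)),
                     key=lambda i: weighted_distance(x, data[i], w))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def fitness(w, data, labels, k=3):
    """Leave-one-out accuracy of weighted kNN on the training set."""
    correct = 0
    for i, x in enumerate(data):
        rest = data[:i] + data[i + 1:]
        rest_labels = labels[:i] + labels[i + 1:]
        if knn_predict(x, rest, rest_labels, w, k) == labels[i]:
            correct += 1
    return correct / len(data)

def evolve_weights(data, labels, n_features, pop=20, gens=30, k=3, seed=0):
    """Minimal GA: keep the better half, refill with crossover + mutation."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population,
                        key=lambda w: fitness(w, data, labels, k), reverse=True)
        survivors = scored[:pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            p1, p2 = rng.sample(survivors, 2)
            child = [a if rng.random() < 0.5 else b for a, b in zip(p1, p2)]
            # Gaussian mutation, clipped so weights stay non-negative.
            child = [max(0.0, g + rng.gauss(0, 0.1)) for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda w: fitness(w, data, labels, k))
```

In this sketch the GA's fitness is leave-one-out training accuracy, so a feature that carries no class information tends to receive a small weight, which is the intuition behind weighting the distance function in the first place.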