A Framework for Multi-class Learning in Micro-array Data Analysis
AIME '09 Proceedings of the 12th Conference on Artificial Intelligence in Medicine: Artificial Intelligence in Medicine
Classification of microarray data has been studied extensively, but little work has addressed microarray classification problems involving more than two classes. This paper proposes a learning strategy for building a multi-target classifier that takes advantage of well-known data mining techniques. To address the intrinsic difficulty of selecting features that improve classification accuracy, the paper employs a set of binary classifiers, each devoted to predicting a single class of the multi-class problem. These classifiers act as local experts whose knowledge (about the features most correlated with each class value) the learning strategy exploits to select an optimal feature set. Results of experiments performed on a publicly available dataset demonstrate the feasibility of the proposed approach.
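The strategy described in the abstract can be sketched as follows: one binary "expert" per class scores features for its own one-vs-rest problem, the top-ranked features of each expert are pooled into a single set, and a classifier is trained on that union. The sketch below is illustrative only, under assumed simplifications: the per-class feature score is a crude mean-difference statistic standing in for whatever correlation measure the paper's experts use, and the final classifier is a nearest-centroid model rather than the paper's actual learner.

```python
# Hedged sketch of per-class (one-vs-rest) feature selection with
# a union step, as outlined in the abstract. The scoring function
# and final classifier are illustrative assumptions, not the
# paper's actual method.

def class_feature_scores(X, y, target):
    """Score each feature for the binary problem 'target vs rest'
    by |mean inside class - mean outside class| (a simple stand-in
    for a correlation-based relevance measure)."""
    in_rows = [x for x, label in zip(X, y) if label == target]
    out_rows = [x for x, label in zip(X, y) if label != target]
    scores = []
    for j in range(len(X[0])):
        mu_in = sum(r[j] for r in in_rows) / len(in_rows)
        mu_out = sum(r[j] for r in out_rows) / len(out_rows)
        scores.append(abs(mu_in - mu_out))
    return scores

def select_union(X, y, k=2):
    """Union of each binary expert's top-k feature indices."""
    selected = set()
    for c in sorted(set(y)):
        scores = class_feature_scores(X, y, c)
        top = sorted(range(len(scores)), key=lambda j: -scores[j])[:k]
        selected.update(top)
    return sorted(selected)

def nearest_centroid_predict(X, y, feats, x_new):
    """Classify x_new by nearest class centroid, restricted to the
    selected feature subset."""
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(r[j] for r in rows) / len(rows) for j in feats]
    def sq_dist(c):
        return sum((x_new[j] - m) ** 2 for j, m in zip(feats, centroids[c]))
    return min(centroids, key=sq_dist)

# Toy example (synthetic, not microarray data): feature 0 marks
# class 'A', feature 1 marks 'B', feature 2 marks 'C', feature 3 is noise.
X = [[5, 0, 0, 1], [6, 0, 0, 0],
     [0, 5, 0, 1], [0, 6, 0, 0],
     [0, 0, 5, 1], [0, 0, 6, 0]]
y = ['A', 'A', 'B', 'B', 'C', 'C']
feats = select_union(X, y, k=1)                      # → [0, 1, 2]
pred = nearest_centroid_predict(X, y, feats, [0, 7, 0, 1])  # → 'B'
```

Note that pooling each expert's top features (rather than ranking features globally) is what lets a class with few samples still contribute its most discriminative genes to the final subset.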