Cancer diagnosis data, such as microarray gene expression profiles and proteomic profiles, are often described by thousands of features. To make a computational diagnosis for new samples, these data are usually fed to a learning algorithm, which induces a classifier; the classifier then predicts a class label for each test sample. Because the data are so high-dimensional, most of the resulting classifiers are very complicated, particularly those based on kernel functions such as support vector machines: interpreting a decision requires all of the features to be involved. In this paper, we discuss built-in features and use them to concisely characterize the data and to make the decisions easy to interpret. Built-in features are the features actually used in a classifier, e.g., the features appearing in a decision tree; they form only a small subset of the original features. The notion of built-in features is therefore distinct from both input features and original features. Since the huge set of original features is reduced to a small number of relevant ones, the complexity of interpretation is greatly eased. Built-in features also offer much potential for elucidating the translation from raw data to clinically useful knowledge. We further report that the performance of classifiers using built-in features tends to remain stable even when the input feature space changes, whereas the performance of other types of classifiers fluctuates. For this reason too, we advocate classifiers that use built-in features, since the corresponding algorithms avoid the hard problem of selecting the best number of features for learning.
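To make the notion concrete, here is a minimal sketch (not the authors' code) of extracting the built-in features of a decision tree with scikit-learn: the synthetic high-dimensional data, the informative feature indices, and all parameter choices are illustrative assumptions, but the idea matches the paper's example of "the features in a decision tree".

```python
# Illustrative sketch: the "built-in" features of a trained decision tree
# are the few original features that actually appear at its split nodes.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 100, 1000  # high-dimensional, microarray-like (assumed sizes)
X = rng.normal(size=(n_samples, n_features))
# Make the class depend on only two features so the task is learnable.
y = (X[:, 3] + X[:, 42] > 0).astype(int)

clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# Internal nodes store their splitting feature index; leaves are marked -2.
built_in = sorted({f for f in clf.tree_.feature if f >= 0})
print(f"{len(built_in)} built-in features out of {n_features}: {built_in}")
```

The tree typically uses only a handful of the 1,000 original features, so a clinician inspecting the decision rule needs to look at only those few measurements rather than the full profile.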