Classification and knowledge discovery in protein databases

  • Authors:
  • Predrag Radivojac; Nitesh V. Chawla; A. Keith Dunker; Zoran Obradovic

  • Affiliations:
  • Indiana University School of Informatics and Center for Information Science and Technology, Temple University; Department of Computer Science and Engineering, University of Notre Dame and Customer Behavior Analytics, Canadian Imperial Bank of Commerce, Canada; Center for Computational Biology and Bioinformatics, Indiana University School of Medicine; Center for Information Science and Technology, Temple University

  • Venue:
  • Journal of Biomedical Informatics - Special issue: Biomedical machine learning
  • Year:
  • 2004

Abstract

We consider the problem of classification in noisy, high-dimensional, and class-imbalanced protein datasets. To design a complete classification system, we use a three-stage machine learning framework consisting of a feature selection stage, a method addressing noise and class imbalance, and a method for combining biologically related tasks through prior-knowledge-based clustering. In the first stage, we employ Fisher's permutation test as a feature selection filter. Comparisons with alternative criteria show that it may be favorable for typical protein datasets. In the second stage, noise and class imbalance are addressed by minority-class over-sampling, majority-class under-sampling, and ensemble learning. The performance of logistic regression models, decision trees, and neural networks is systematically evaluated. The experimental results show that in many cases ensembles of logistic regression classifiers may outperform more expressive models due to their robustness to noise and low sample density in a high-dimensional feature space. However, ensembles of neural networks may be the best solution for large datasets. In the third stage, we use prior knowledge to partition unlabeled data such that the class distributions among non-overlapping clusters differ significantly. In our experiments, training classifiers specialized to the class distribution of each cluster resulted in a further decrease in classification error.
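To make the first stage concrete, a permutation-test feature filter of the kind the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the choice of test statistic (absolute difference of class means), the function name, and all parameters are assumptions made for the example.

```python
import numpy as np

def permutation_test_filter(X, y, n_perm=1000, alpha=0.05, rng=None):
    """Two-sample permutation test applied independently to each feature.

    For every feature, the observed statistic is the absolute difference
    between the class means. Labels are then randomly permuted n_perm
    times, and the p-value is the fraction of permutations whose statistic
    is at least as extreme as the observed one (with a +1 correction so
    p-values are never exactly zero).

    Returns (selected, pvals): indices of features with p < alpha, and
    the per-feature p-values.
    """
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)

    # Observed statistic per feature: |mean(class 1) - mean(class 0)|.
    observed = np.abs(X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0))

    # Count how often a label permutation produces a statistic >= observed.
    exceed = np.zeros(X.shape[1])
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        diff = np.abs(X[y_perm == 1].mean(axis=0) - X[y_perm == 0].mean(axis=0))
        exceed += diff >= observed

    pvals = (exceed + 1.0) / (n_perm + 1.0)
    return np.flatnonzero(pvals < alpha), pvals
```

On synthetic data where one feature separates the classes and another is pure noise, the filter retains the informative feature and tends to discard the noisy one; for high-dimensional protein data a multiple-testing correction on `alpha` would normally be added on top of this sketch.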