The Positive Effects of Negative Information: Extending One-Class Classification Models in Binary Proteomic Sequence Classification

  • Authors:
  • Stefan Mutter; Bernhard Pfahringer; Geoffrey Holmes

  • Affiliations:
  • Department of Computer Science, The University of Waikato, Hamilton, New Zealand (all authors)

  • Venue:
  • AI '09: Proceedings of the 22nd Australasian Joint Conference on Advances in Artificial Intelligence
  • Year:
  • 2009

Abstract

Profile Hidden Markov Models (PHMMs) have been widely used as models for Multiple Sequence Alignments. By their nature, they are generative one-class classifiers, trained only on sequences belonging to the target class they represent. Nevertheless, they are often used to discriminate between classes. In this paper, we investigate the beneficial effects of information from non-target classes on discriminative tasks. First, the traditional PHMM is extended to a new binary classifier. Second, we propose propositional representations of the original PHMM that capture information from both target and non-target sequences and can be used with standard binary classifiers. Since PHMM training is time-intensive, we also investigate whether our approach allows PHMM training to be stopped before full convergence without loss of predictive power.
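
The abstract describes two ways of exploiting non-target information: extending the PHMM itself into a binary classifier, and propositionalising sequences via PHMM-derived features for a standard learner. The Python sketch below illustrates the general idea only, under stated assumptions: ProfileModel is a toy position-specific scoring model standing in for a real PHMM, and the feature vector (target score, non-target score, and their difference) is one plausible propositional representation, not necessarily the one used in the paper.

```python
# Minimal sketch of propositionalisation with target and non-target models.
# ProfileModel is a hypothetical stand-in for a Profile HMM, not the paper's method.
from collections import Counter
from math import log

from sklearn.linear_model import LogisticRegression

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids


class ProfileModel:
    """Toy generative model: per-position residue frequencies with pseudocounts."""

    def __init__(self, aligned_seqs, pseudocount=1.0):
        length = len(aligned_seqs[0])
        self.log_probs = []
        for pos in range(length):
            counts = Counter(seq[pos] for seq in aligned_seqs)
            total = sum(counts[a] + pseudocount for a in ALPHABET)
            self.log_probs.append(
                {a: log((counts[a] + pseudocount) / total) for a in ALPHABET}
            )

    def log_likelihood(self, seq):
        """Log-probability of a gap-free, aligned-length sequence under the model."""
        return sum(col[res] for col, res in zip(self.log_probs, seq))


def propositionalise(seq, target_model, non_target_model):
    """Map a sequence to a fixed-length feature vector of model scores."""
    lt = target_model.log_likelihood(seq)
    ln = non_target_model.log_likelihood(seq)
    return [lt, ln, lt - ln]  # the difference is a log-odds score


# --- usage (toy data, for illustration only) -------------------------------
target_train = ["ACDE", "ACDF", "ACDD"]      # sequences of the target class
non_target_train = ["WYYK", "WYHK", "WFYK"]  # non-target ("negative") sequences

target_model = ProfileModel(target_train)
non_target_model = ProfileModel(non_target_train)

train_seqs = target_train + non_target_train
labels = [1] * len(target_train) + [0] * len(non_target_train)

X = [propositionalise(s, target_model, non_target_model) for s in train_seqs]
clf = LogisticRegression().fit(X, labels)

print(clf.predict([propositionalise("ACDE", target_model, non_target_model)]))  # -> [1]
```

The log-odds feature alone corresponds to the simplest binary extension of a one-class PHMM, namely comparing the target and non-target model scores directly; passing all three scores to a standard classifier instead lets the learner weight the target and non-target evidence from the training data.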