Robust text classification using a hysteresis-driven extended SRN

  • Authors:
  • Garen Arevian; Christo Panchev

  • Affiliations:
  • University of Sunderland, School of Computing and Technology, Sunderland, United Kingdom

  • Venue:
  • ICANN'07: Proceedings of the 17th International Conference on Artificial Neural Networks
  • Year:
  • 2007

Abstract

Recurrent Neural Network (RNN) models have been shown to perform well on artificial grammars for sequential classification tasks involving long-term time dependencies. However, RNNs have rarely been applied to real-world text classification tasks. This paper presents results on the capabilities of extended two-context-layer SRN models (xRNN) applied to the classification of the Reuters-21578 corpus. The results show that the classifiers handle high levels of noise introduced into the word sequences of titles very robustly, where noise is defined as the unimportant stopwords found in natural-language text, and maintain consistent levels of performance. Comparisons are made with SRN and MLP models, as well as with other existing classifiers for the text classification task.
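
For readers unfamiliar with the architecture, the sketch below illustrates one plausible reading of a two-context-layer SRN in which each context layer is a hysteresis (leaky) copy of the hidden layer. The layer sizes, hysteresis coefficients, and toy usage are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoContextSRN:
    """Sketch of an SRN extended with two context layers.

    Assumed formulation: each context layer is a hysteresis blend of its
    previous state and the previous hidden activation,
        c_i(t) = phi_i * c_i(t-1) + (1 - phi_i) * h(t-1),
    and the hidden layer sees the current input plus both context layers.
    All sizes and coefficients here are illustrative only.
    """

    def __init__(self, n_in, n_hidden, n_out, phi=(0.2, 0.8), rng=None):
        rng = rng or np.random.default_rng(0)
        self.phi = np.asarray(phi)  # hysteresis coefficient per context layer
        scale = 0.1
        self.W_in = rng.normal(0, scale, (n_hidden, n_in))
        self.W_c1 = rng.normal(0, scale, (n_hidden, n_hidden))
        self.W_c2 = rng.normal(0, scale, (n_hidden, n_hidden))
        self.W_out = rng.normal(0, scale, (n_out, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_o = np.zeros(n_out)

    def forward(self, sequence):
        """Run one title (a list of word input vectors); return class probabilities."""
        n_hidden = self.b_h.shape[0]
        h = np.zeros(n_hidden)
        c1 = np.zeros(n_hidden)
        c2 = np.zeros(n_hidden)
        for x in sequence:
            # Hysteresis update: blend previous context with previous hidden state.
            c1 = self.phi[0] * c1 + (1 - self.phi[0]) * h
            c2 = self.phi[1] * c2 + (1 - self.phi[1]) * h
            h = sigmoid(self.W_in @ x + self.W_c1 @ c1 + self.W_c2 @ c2 + self.b_h)
        # Classify from the final hidden state (softmax over topic classes).
        logits = self.W_out @ h + self.b_o
        e = np.exp(logits - logits.max())
        return e / e.sum()


# Toy usage: a "title" of 5 word vectors, classified into 4 hypothetical topics.
net = TwoContextSRN(n_in=50, n_hidden=20, n_out=4)
title = [np.random.default_rng(i).normal(size=50) for i in range(5)]
print(net.forward(title))
```

The design intuition behind the two context layers is that different hysteresis coefficients give the network memories that decay at different rates, which may help it ignore interspersed stopwords while retaining topic-bearing words across a title.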