A characterization of wordnet features in Boolean models for text classification

  • Authors:
  • Trevor Mansuy; Robert J. Hilderman

  • Affiliations:
  • University of Regina, Regina, Saskatchewan, Canada; University of Regina, Regina, Saskatchewan, Canada

  • Venue:
  • AusDM '06 Proceedings of the fifth Australasian conference on Data mining and analytics - Volume 61
  • Year:
  • 2006

Abstract

Supervised text classification is the task of automatically assigning a category label to a previously unlabeled text document. We start with a collection of pre-labeled examples whose assigned categories are used to build a predictive model for each category. Incorporating semantic features from the WordNet lexical database is one of many approaches that have been tried in previous research to improve the predictive accuracy of text classification models. The intuition is that the words in the training set alone may not be extensive enough to enable the generation of a universal model for a category, but that through WordNet expansion (i.e., incorporating words defined by various relationships in WordNet), a more accurate model may be possible. In this paper, we report preliminary results obtained from a comprehensive study in which WordNet features, part-of-speech tags, and term weighting schemes are incorporated into two-category text classification models generated by both a Naive Bayes text classifier and an SVM text classifier. We characterize the behaviour of these classifiers on fifteen document collections extracted from the Reuters-21578, USENET, DigiTrad, and 20-Newsgroups text corpora. Experimental results show that incorporating WordNet features, utilizing part-of-speech tags during WordNet expansion, and applying term weighting schemes have no positive effect on the accuracy of the Naive Bayes and SVM classifiers.
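The sketch below illustrates the general idea of WordNet expansion in a Boolean (presence/absence) bag-of-words model: each document's tokens are augmented with WordNet synonyms and hypernyms before Naive Bayes and SVM classifiers are trained. It is a minimal illustration, not the paper's implementation; it assumes NLTK (with the wordnet corpus downloaded) and scikit-learn, and names such as expand_with_wordnet and the toy documents are hypothetical.

```python
# Hedged sketch of WordNet feature expansion for Boolean text classification.
# Assumes: nltk with the 'wordnet' corpus installed, and scikit-learn.
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import LinearSVC

def expand_with_wordnet(text, pos=wn.NOUN):
    """Append WordNet synonyms and hypernyms of each token to the text.

    Restricting the lookup to one part of speech (here: nouns) mirrors the
    POS-constrained expansion examined in the paper; the exact expansion
    strategy here is illustrative only.
    """
    tokens = text.lower().split()
    expanded = list(tokens)
    for token in tokens:
        for synset in wn.synsets(token, pos=pos):
            # Synonyms from the synset itself.
            expanded.extend(lemma.lower() for lemma in synset.lemma_names())
            # More general terms one level up.
            for hypernym in synset.hypernyms():
                expanded.extend(lemma.lower() for lemma in hypernym.lemma_names())
    return " ".join(expanded)

# Toy two-category training data (hypothetical, not from the paper's corpora).
train_docs = ["wheat harvest prices rise", "goalkeeper saves penalty kick"]
train_labels = ["grain", "sport"]

# Boolean features: 1 if a term occurs in the expanded document, else 0.
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(expand_with_wordnet(d) for d in train_docs)

nb = BernoulliNB().fit(X, train_labels)   # Naive Bayes text classifier
svm = LinearSVC().fit(X, train_labels)    # SVM text classifier

test = vectorizer.transform([expand_with_wordnet("corn crop yields fall")])
print(nb.predict(test), svm.predict(test))
```

Comparing such a pipeline with and without the expansion step (and with weighted rather than Boolean features) is the kind of controlled comparison the study performs across its fifteen document collections.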