In this work we investigate the usefulness of {\em $n$-grams} for document indexing in text categorization (TC). We call an $n$-gram a set $t_k$ of $n$ word stems, and we say that $t_k$ occurs in a document $d_j$ when a sequence of words appears in $d_j$ that, after stop word removal and stemming, consists exactly of the $n$ stems in $t_k$, in some order. Previous research has investigated the use of $n$-grams (or variants of them) only in the context of specific learning algorithms, and has thus not obtained general answers on their usefulness for TC. Here we instead investigate the usefulness of $n$-grams in TC independently of any specific learning algorithm. We do so by applying feature selection to the pool of all $\alpha$-grams ($\alpha\leq n$) and checking how many $n$-grams score high enough to be selected among the top $\sigma$ $\alpha$-grams. We report the results of experiments, performed on the standard {\sf Reuters-21578} TC benchmark, using several feature selection functions and varying values of $\sigma$. We also report the results of actually using the selected $n$-grams in a linear classifier induced by means of the Rocchio method.
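To make the definition above concrete, here is a minimal sketch of $n$-gram extraction under stated assumptions: the stop-word list is a toy one, and the stemmer is a crude suffix-stripping stand-in for a real one (e.g. Porter's); all function names are illustrative, not the paper's actual implementation. Each contiguous window of $n$ stems is recorded as an unordered set, so word order within the window is ignored, exactly as in the definition.

```python
from collections import Counter

# Toy stop-word list; a real run would use a standard list.
STOP_WORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is"}

def stem(word: str) -> str:
    """Crude suffix stripping as a stand-in for a real stemmer (e.g. Porter's)."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def extract_ngrams(text: str, n: int) -> Counter:
    """Count the n-grams of a document: each contiguous window of n stems
    (after stop-word removal and stemming) is recorded as an unordered set."""
    stems = [stem(w) for w in text.lower().split() if w not in STOP_WORDS]
    grams = Counter()
    for i in range(len(stems) - n + 1):
        window = frozenset(stems[i:i + n])
        if len(window) == n:  # skip windows in which a stem repeats
            grams[window] += 1
    return grams

# "indexing ... document" and "document index" yield the same 2-gram:
print(extract_ngrams("the indexing of the document index system", 2))
```

Representing each gram as a frozenset is what implements the "in some order" clause: two windows that differ only in stem order map to the same $n$-gram.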
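The learner-independent check can be sketched as follows. As one illustrative choice among the several feature selection functions the abstract mentions, this uses the standard $\chi^2$ statistic computed from the $2\times 2$ contingency table of term presence versus category membership; the use of $\chi^2$ specifically, and the helper names, are assumptions for illustration.

```python
def chi_square(A: int, B: int, C: int, D: int) -> float:
    """chi^2(t, c) from document counts: A = docs of category c containing t,
    B = docs outside c containing t, C = docs of c without t, D = the rest."""
    N = A + B + C + D
    denom = (A + B) * (C + D) * (A + C) * (B + D)
    return N * (A * D - C * B) ** 2 / denom if denom else 0.0

def share_of_ngrams_in_top(scores: dict, sigma: int, n: int) -> float:
    """Rank the pooled alpha-grams by score and return the fraction of the
    top-sigma terms that are full n-grams (terms are sets of stems here,
    so len(term) is the gram size)."""
    top = sorted(scores, key=scores.get, reverse=True)[:sigma]
    return sum(1 for t in top if len(t) == n) / max(len(top), 1)
```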
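Finally, a hedged sketch of the Rocchio method used to induce the linear classifier: the profile of a category is a weighted difference between the centroids of its positive and negative training documents, and a test document is accepted when its dot product with that profile exceeds a threshold. The $\beta$, $\gamma$, and threshold values below are conventional illustrative settings, not necessarily those used in the paper.

```python
def rocchio_profile(pos_docs, neg_docs, beta=16.0, gamma=4.0):
    """Category profile: beta * centroid of positive training documents
    minus gamma * centroid of negative ones. Documents are dicts mapping
    a term (e.g. a frozenset of stems) to a weight such as tf-idf."""
    profile = {}
    for docs, coeff in ((pos_docs, beta / max(len(pos_docs), 1)),
                        (neg_docs, -gamma / max(len(neg_docs), 1))):
        for doc in docs:
            for term, w in doc.items():
                profile[term] = profile.get(term, 0.0) + coeff * w
    return profile

def classify(profile, doc, threshold=0.0):
    """Accept the document for the category iff its dot product with the
    Rocchio profile exceeds the (illustrative) threshold."""
    return sum(w * profile.get(t, 0.0) for t, w in doc.items()) > threshold
```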