Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR '02)
Recently, researchers have extended the language modeling approach to support relevance feedback. Existing approaches fall into two categories. The first is expansion-based feedback, which performs ‘term selection’ and ‘term re-weighting’ as separate, sequential steps. The second is model-based feedback, which focuses on estimating a ‘query language model’ that better predicts the user's information need. This paper improves both approaches by introducing a maximum a posteriori (MAP) probability criterion and a three-component mixture model. The MAP criterion selects good expansion terms from the feedback documents. The three-component mixture model reduces noise in the query language model by adding a ‘document-specific topic model’. Experimental results show that our methods improve the precision of relevance feedback for short queries. In addition, we present a comparative study of several relevance feedback methods on three document collections.
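As a rough illustration of the model-based side, a three-component mixture of the kind described above can be sketched with EM: each word occurrence in a feedback document is assumed to be generated by either a background collection model, a document-specific topic model, or a shared query language model, and only the query model is re-estimated. This is a minimal sketch; the function name, the fixed mixing weights, and the use of per-document maximum-likelihood estimates for the document-specific component are illustrative assumptions, not the paper's actual estimator.

```python
from collections import Counter

def estimate_query_model(feedback_docs, collection_model,
                         lam_bg=0.5, lam_doc=0.3, n_iters=30):
    """EM sketch for a three-component mixture over feedback documents.

    Each word is drawn from a mixture of:
      - a background collection model (weight lam_bg),
      - a document-specific topic model (weight lam_doc),
      - a shared query language model (remaining weight), re-estimated by EM.
    All weights and model choices here are illustrative assumptions.
    """
    vocab = {w for d in feedback_docs for w in d}

    # Document-specific topic models: approximated by each document's
    # maximum-likelihood unigram distribution (an assumption of this sketch).
    doc_models = []
    for d in feedback_docs:
        counts = Counter(d)
        total = sum(counts.values())
        doc_models.append({w: counts[w] / total for w in counts})

    # Initialize the query model uniformly over the feedback vocabulary.
    theta = {w: 1.0 / len(vocab) for w in vocab}
    lam_q = 1.0 - lam_bg - lam_doc

    for _ in range(n_iters):
        expected = Counter()
        for d, dm in zip(feedback_docs, doc_models):
            for w in d:
                # E-step: posterior probability that the query model
                # generated this word occurrence.
                p_q = lam_q * theta[w]
                denom = (p_q
                         + lam_bg * collection_model.get(w, 1e-9)
                         + lam_doc * dm.get(w, 0.0))
                expected[w] += p_q / denom
        # M-step: renormalize expected counts into a distribution.
        total = sum(expected.values())
        theta = {w: expected[w] / total for w in vocab}
    return theta
```

The intended effect is that words common in the collection (stopwords) or peculiar to a single feedback document are absorbed by the other two components, leaving the query model concentrated on terms shared across the feedback documents.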