Query model estimations for relevance feedback in language modeling approach

  • Authors:
  • Seung-Hoon Na, In-Su Kang, Kyonghi Moon, Jong-Hyeok Lee

  • Affiliations:
  • Seung-Hoon Na, In-Su Kang, Jong-Hyeok Lee: Division of Electrical and Computer Engineering, POSTECH, AITrc, Republic of Korea
  • Kyonghi Moon: Division of Computer and Information Engineering, Silla University, Republic of Korea

  • Venue:
  • AIRS'04 Proceedings of the 2004 international conference on Asian Information Retrieval Technology
  • Year:
  • 2004

Abstract

Recently, researchers have successfully augmented the language modeling approach with a well-founded framework for incorporating relevance feedback. A critical problem in this framework is estimating a query language model that encodes detailed knowledge about a user's information need. This paper explores several methods for query model estimation, motivated by Zhai's generative model. The generative model is an estimation method that maximizes the generative likelihood of the feedback documents under the estimated query language model. Focusing on limitations of the original generative model, we propose several estimation methods to resolve them: 1) a three-component mixture model, 2) re-sampling feedback documents with document language models, and 3) sampling a relevance document from a relevance document language model. In addition, several hybrid methods are examined, which combine query-specific smoothing with the estimated query language model. In experiments, our estimation methods outperform the simple generative model, showing a significant improvement over the initial retrieval.
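The baseline the abstract refers to — estimating a query (feedback) language model by maximizing the likelihood of feedback documents — is commonly implemented with EM over a two-component mixture of the unknown topic model and the collection background model. The sketch below illustrates that idea only; the function name, toy data, and the fixed mixing weight `lam` are illustrative assumptions, not the paper's actual experimental setup.

```python
from collections import Counter

def estimate_feedback_model(feedback_docs, collection_docs, lam=0.5, iters=30):
    """EM estimate of a feedback query model theta_F, assuming each word
    occurrence in the feedback documents is generated by the mixture
    lam * p(w|theta_F) + (1 - lam) * p(w|C).  Illustrative sketch only."""
    # Background (collection) model p(w|C) by maximum likelihood.
    coll_counts = Counter(w for d in collection_docs for w in d)
    coll_total = sum(coll_counts.values())
    p_c = {w: c / coll_total for w, c in coll_counts.items()}

    # Word counts pooled over the feedback documents.
    fb_counts = Counter(w for d in feedback_docs for w in d)
    vocab = list(fb_counts)

    # Initialize theta_F uniformly over the feedback vocabulary.
    theta = {w: 1.0 / len(vocab) for w in vocab}

    for _ in range(iters):
        # E-step: posterior that an occurrence of w came from theta_F.
        post = {}
        for w in vocab:
            topic = lam * theta[w]
            denom = topic + (1 - lam) * p_c.get(w, 1e-12)
            post[w] = topic / denom
        # M-step: re-estimate theta_F from expected topic counts.
        expected = {w: fb_counts[w] * post[w] for w in vocab}
        z = sum(expected.values())
        theta = {w: e / z for w, e in expected.items()}
    return theta
```

The effect is that words common in the feedback documents but rare in the collection (topical words) receive high probability in the query model, while background words such as stopwords are explained by p(w|C) and downweighted.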