Performance evaluation is an important issue in Web search engine research. Traditional evaluation methods require substantial human effort and are therefore quite time-consuming. Using click-through data analysis, we propose an automatic search engine performance evaluation method. This method generates navigational query topics and their answers automatically from search users' querying and clicking behavior. Experimental results based on the user logs of a commercial Chinese search engine show that the automatic method produces evaluation results similar to those of traditional assessor-based approaches.
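The core idea of mining navigational topics and answers from click-through logs can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes a simple click-concentration heuristic, where a query whose clicks overwhelmingly land on one URL is treated as navigational and that URL becomes its answer. The function name, thresholds, and log format are all hypothetical.

```python
# Hypothetical sketch of topic/answer extraction from a click-through log.
# Assumption: a query is "navigational" if one URL receives a dominant
# share of its clicks; the paper's actual criteria may differ.
from collections import Counter, defaultdict

def extract_navigational_topics(click_log, min_clicks=50, dominance=0.8):
    """click_log: iterable of (query, clicked_url) pairs.

    Returns {query: answer_url} for queries whose clicks concentrate
    on a single URL, which is then used as the query's answer page.
    """
    clicks = defaultdict(Counter)
    for query, url in click_log:
        clicks[query][url] += 1

    topics = {}
    for query, counter in clicks.items():
        total = sum(counter.values())
        if total < min_clicks:
            continue  # too few observations to judge query type
        url, top = counter.most_common(1)[0]
        if top / total >= dominance:  # clicks dominated by one URL
            topics[query] = url
    return topics
```

A search engine's result list for each extracted query could then be scored automatically by checking at which rank the answer URL appears, replacing the manual topic creation and relevance assessment steps.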