SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
A major hurdle for many information retrieval researchers, especially in academia, is evaluating retrieval systems in the wild. Key challenges include reaching large user bases, collecting user behavior data, and modifying a deployed retrieval system. We outline several options available to researchers for overcoming these challenges, along with the advantages and disadvantages of each. We then demonstrate how CrowdLogger, an open-source browser extension for Firefox and Google Chrome, can be used as an in situ evaluation platform.