Thumbs-Up: a game for playing to rank search results
Proceedings of the ACM SIGKDD Workshop on Human Computation
We consider the problem of identifying the consensus ranking for the results of a query, given preferences among those results from a set of individual users. Once consensus rankings are identified for a set of queries, they can serve for both evaluation and training of retrieval and learning systems. We present a novel approach to collecting individual user preferences over image-search results: a collaborative game in which players are rewarded for agreeing on which image result is best for a query. Our approach is distinct from other labeling games because we directly elicit the preferences of interest with respect to image queries extracted from query logs. As a source of relevance judgments, this data provides a useful complement to click data. Furthermore, the data is free of positional biases and is collected by the game without the risk of frustrating users with non-relevant results, a risk prevalent in standard mechanisms for debiasing clicks. We describe data collected over 34 days from a deployed version of this game, amounting to approximately 18 million expressed pairwise preferences. Finally, we present several approaches to modeling this data in order to extract consensus rankings from the preferences and better sort the search results for targeted queries.
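The abstract does not specify the models used to extract consensus rankings from the pairwise preferences. As a minimal illustration of the underlying aggregation task (not the paper's actual method), the sketch below ranks results by net wins across expressed preferences, a simple Borda-style heuristic; the item names and the `consensus_ranking` helper are hypothetical.

```python
from collections import defaultdict

def consensus_ranking(preferences):
    """Aggregate pairwise preferences into a single ranking.

    `preferences` is an iterable of (winner, loser) pairs, one per
    expressed judgment. Items are ranked by net wins (wins minus
    losses), a simple Borda-style heuristic; this is an illustrative
    baseline, not the modeling approach described in the paper.
    """
    score = defaultdict(int)
    for winner, loser in preferences:
        score[winner] += 1
        score[loser] -= 1
    # Sort by descending net wins; ties broken alphabetically for determinism.
    return sorted(score, key=lambda item: (-score[item], item))

# Hypothetical preferences over three image results for one query.
prefs = [("a", "b"), ("a", "c"), ("b", "c"), ("a", "b")]
print(consensus_ranking(prefs))  # ['a', 'b', 'c']
```

More principled aggregators (e.g., Bradley-Terry-style probabilistic models) fit a latent score per item and handle noisy or cyclic preferences more gracefully than raw win counts.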