Human computation techniques have demonstrated their ability to accomplish portions of tasks that machine-based techniques find difficult. Query refinement is a task that may benefit from human involvement. We conduct an experiment that evaluates the contributions of two user types: student participants and crowdworkers hired from an online labor market. Human participants are assigned to one of two query interfaces: a traditional web-based interface or a game-based interface. We ask each group to manually construct queries that respond to TREC information needs, and we calculate the resulting recall and precision. Traditional web interface users are given feedback on their initial queries and asked to use this feedback to reformulate those queries. Game interface users are given instant scoring and asked to refine their queries based on their scores. We measure the resulting feedback-based improvement for each group and compare the results of these human computation techniques with those of machine-based algorithms.
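Because the evaluation hinges on per-query recall and precision and on feedback-based improvement, the following minimal Python sketch illustrates how those measures could be computed from TREC-style relevance judgments. It is an illustration under stated assumptions, not the authors' implementation; the function name, document IDs, and data layout are hypothetical.

```python
# Minimal sketch: per-query precision and recall against TREC-style qrels.
# All identifiers and the example data below are illustrative assumptions.

def precision_recall(retrieved, relevant):
    """Compute precision and recall for a single query.

    retrieved: ordered list of document IDs returned for the query.
    relevant:  set of document IDs judged relevant in the qrels.
    """
    hits = sum(1 for doc_id in retrieved if doc_id in relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall


# Example: feedback-based improvement for one participant's query.
initial_run = ["d3", "d7", "d9", "d12"]        # results of the initial query
reformulated_run = ["d3", "d5", "d7", "d8"]    # results after feedback
qrels = {"d3", "d5", "d8", "d21"}              # judged-relevant documents

p0, r0 = precision_recall(initial_run, qrels)
p1, r1 = precision_recall(reformulated_run, qrels)
print(f"precision: {p0:.2f} -> {p1:.2f}, recall: {r0:.2f} -> {r1:.2f}")
```

Averaging such per-query deltas over a group's queries would yield one plausible group-level measure of feedback-based improvement.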