The Turn: Integration of Information Seeking and Retrieval in Context (The Information Retrieval Series)
Queue - AI
TREC: Continuing information retrieval's tradition of experimentation
Communications of the ACM
Journal of the American Society for Information Science and Technology
Why is web search so hard... to evaluate?
Journal of Web Engineering
No bull, no spin: a comparison of tags with other forms of user metadata
Proceedings of the 9th ACM/IEEE-CS joint conference on Digital libraries
Relevance criteria for e-commerce: a crowdsourcing-based experimental analysis
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
IR Evaluation without a Common Set of Topics
ICTIR '09 Proceedings of the 2nd International Conference on Theory of Information Retrieval: Advances in Information Retrieval Theory
Methods for Evaluating Interactive Information Retrieval Systems with Users
Foundations and Trends in Information Retrieval
A crowdsourceable QoE evaluation framework for multimedia content
MM '09 Proceedings of the 17th ACM international conference on Multimedia
Clustering and exploring search results using timeline constructions
Proceedings of the 18th ACM conference on Information and knowledge management
Who are the crowdworkers?: shifting demographics in Mechanical Turk
CHI '10 Extended Abstracts on Human Factors in Computing Systems
Crowdsourcing the assembly of concept hierarchies
Proceedings of the 10th annual joint conference on Digital libraries
Do user preferences and evaluation measures line up?
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Clustering dictionary definitions using Amazon Mechanical Turk
CSLDAMT '10 Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk
Crowdsourcing document relevance assessment with Mechanical Turk
CSLDAMT '10 Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk
Aspects and analysis of patent test collections
PaIR '10 Proceedings of the 3rd international workshop on Patent information retrieval
Automated component-level evaluation: present and future
CLEF'10 Proceedings of the 2010 international conference on Multilingual and multimodal information access evaluation: cross-language evaluation forum
Human-assisted graph search: it's okay to ask questions
Proceedings of the VLDB Endowment
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
ViewSer: a tool for large-scale remote studies of web search result examination
CHI '11 Extended Abstracts on Human Factors in Computing Systems
In search of quality in crowdsourcing for search engine evaluation
ECIR'11 Proceedings of the 33rd European conference on Advances in information retrieval
Crowdsourcing for book search evaluation: impact of HIT design on comparative system ranking
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
ViewSer: enabling large-scale remote user studies of web search examination and interaction
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
Repeatable and reliable search system evaluation using crowdsourcing
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
Pseudo test collections for learning web search ranking functions
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
Cross-corpus relevance projection
Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval
Overview of the INEX 2010 book track: scaling up the evaluation using crowdsourcing
INEX'10 Proceedings of the 9th international conference on Initiative for the evaluation of XML retrieval: comparative evaluation of focused retrieval
Worker types and personality traits in crowdsourcing relevance labels
Proceedings of the 20th ACM international conference on Information and knowledge management
Random partial paired comparison for subjective video quality assessment via HodgeRank
MM '11 Proceedings of the 19th ACM international conference on Multimedia
Do Mechanical Turks dream of square pie charts?
Proceedings of the 3rd BELIV'10 Workshop: BEyond time and errors: novel evaLuation methods for Information Visualization
A language modeling approach for temporal information needs
ECIR'2010 Proceedings of the 32nd European conference on Advances in Information Retrieval
Evaluation and user preference study on spatial diversity
ECIR'2010 Proceedings of the 32nd European conference on Advances in Information Retrieval
CrowdScreen: algorithms for filtering data with humans
SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
So who won?: dynamic max discovery with the crowd
SIGMOD '12 Proceedings of the 2012 ACM SIGMOD International Conference on Management of Data
CDAS: a crowdsourcing data analytics system
Proceedings of the VLDB Endowment
Using preference judgments for novel document retrieval
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Quality through flow and immersion: gamifying crowdsourced relevance assessments
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
Whom to ask?: jury selection for decision making tasks on micro-blog services
Proceedings of the VLDB Endowment
Using crowdsourcing for TREC relevance assessment
Information Processing and Management: an International Journal
Ground truth generation in medical imaging: a crowdsourcing-based iterative approach
Proceedings of the ACM multimedia 2012 workshop on Crowdsourcing for multimedia
An examination of content farms in web search using crowdsourcing
Proceedings of the 21st ACM international conference on Information and knowledge management
Bringing the algorithms to the data: cloud-based benchmarking for medical image analysis
CLEF'12 Proceedings of the Third international conference on Information Access Evaluation: multilinguality, multimodality, and visual analytics
GeoCrowd: enabling query answering with spatial crowdsourcing
Proceedings of the 20th International Conference on Advances in Geographic Information Systems
Towards web-scale structured web data extraction
Proceedings of the sixth ACM international conference on Web search and data mining
How to filter out random clickers in a crowdsourcing-based study?
Proceedings of the 2012 BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization
Phrase detectives: Utilizing collective intelligence for internet-scale language resource creation
ACM Transactions on Interactive Intelligent Systems (TiiS) - Special section on internet-scale human problem solving and regular papers
Human-Computer interaction view on information retrieval evaluation
PROMISE'12 Proceedings of the 2012 international conference on Information Retrieval Meets Information Visualization
An introduction to crowdsourcing for language and multimedia technology research
PROMISE'12 Proceedings of the 2012 international conference on Information Retrieval Meets Information Visualization
Crowdsourcing for information retrieval: introduction to the special issue
Information Retrieval
Crowdsourcing and the crisis-affected community
Information Retrieval
An analysis of human factors and label accuracy in crowdsourcing relevance judgments
Information Retrieval
Identifying top news using crowdsourcing
Information Retrieval
An online cost sensitive decision-making method in crowdsourcing systems
Proceedings of the 2013 ACM SIGMOD International Conference on Management of Data
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
News vertical search: when and what to display to users
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Evaluating the crowd with confidence
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
Learning to rank query suggestions for adhoc and diversity search
Information Retrieval
GeoTruCrowd: trustworthy query answering with spatial crowdsourcing
Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems
Repeatable and reliable semantic search evaluation
Web Semantics: Science, Services and Agents on the World Wide Web
The notion of diversity in graphical entity summarisation on semantic knowledge graphs
Journal of Intelligent Information Systems
Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.
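The abstract describes the crowdsourcing idea at a high level: the evaluation is split into many small relevance-judging tasks, and the per-worker labels are then aggregated into consensus judgments. The sketch below is not the TERC protocol itself; it is a minimal, hypothetical Python illustration that assumes binary relevance labels from multiple workers per topic-document pair and resolves disagreement by simple majority vote.

```python
from collections import Counter, defaultdict

# Hypothetical crowd judgments: (worker_id, topic_id, doc_id, label),
# where label is 1 for "relevant" and 0 for "not relevant".
judgments = [
    ("w1", "t1", "d1", 1),
    ("w2", "t1", "d1", 1),
    ("w3", "t1", "d1", 0),
    ("w1", "t1", "d2", 0),
    ("w2", "t1", "d2", 0),
]

def aggregate_by_majority(judgments):
    """Collapse per-worker labels into one consensus label per (topic, doc) pair."""
    votes = defaultdict(list)
    for worker, topic, doc, label in judgments:
        votes[(topic, doc)].append(label)
    consensus = {}
    for pair, labels in votes.items():
        # Majority vote; ties default to "not relevant" (0).
        counts = Counter(labels)
        consensus[pair] = int(counts[1] > counts[0])
    return consensus

if __name__ == "__main__":
    for (topic, doc), label in aggregate_by_majority(judgments).items():
        print(topic, doc, "relevant" if label else "not relevant")
```

In practice, crowdsourced evaluation pipelines add quality controls such as gold questions, worker filtering, or weighted voting; the simple majority rule shown here is only the most basic form of aggregation.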