The purpose of this article is to test the reliability of query intents derived from queries, either by the user who entered the query or by another juror. We report the findings of three studies. First, we conducted a large-scale classification study (~50,000 queries) using a crowdsourcing approach. Next, we used clickthrough data from a search engine log to validate the judgments given by the jurors in the crowdsourcing study. Finally, we conducted an online survey on a commercial search engine's portal. Because we used the same queries in all three studies, we were also able to compare the results and the effectiveness of the different approaches. We found that neither the crowdsourcing approach, in which jurors classified queries originating from other users, nor the questionnaire approach, in which searchers were asked about a query they had just entered into a Web search engine, led to satisfactory results. This leads us to conclude that the jurors had little understanding of the classification task, even though both groups were given detailed instructions. Although we used manual classification, our research also has important implications for automatic classification: it calls into question the success of approaches that evaluate automatic classifiers against a baseline derived from human jurors. © 2012 Wiley Periodicals, Inc.