IR evaluation methods for retrieving highly relevant documents
SIGIR '00 Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval
Cumulated gain-based evaluation of IR techniques
ACM Transactions on Information Systems (TOIS)
Measuring Search Engine Quality
Information Retrieval
The Turn: Integration of Information Seeking and Retrieval in Context (The Information Retrieval Series)
Effective and efficient user interaction for long queries
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Score standardization for inter-collection comparison of retrieval systems
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Novelty and diversity in information retrieval evaluation
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Relevance thresholds in system evaluations
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
One-button search extracts wider interests: an empirical study with video bookmarking search
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Effective Keyword Search for Software Resources Installed in Large-Scale Grid Infrastructures
WI-IAT '09 Proceedings of the 2009 IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology - Volume 01
Empirical justification of the gain and discount function for nDCG
Proceedings of the 18th ACM conference on Information and knowledge management
Expected reciprocal rank for graded relevance
Proceedings of the 18th ACM conference on Information and knowledge management
Metric and Relevance Mismatch in Retrieval Evaluation
AIRS '09 Proceedings of the 5th Asia Information Retrieval Symposium on Information Retrieval Technology
ACM Transactions on Information Systems (TOIS)
Studies on intrinsic summary evaluation
International Journal of Artificial Intelligence and Soft Computing
Sampling high-quality clicks from noisy click data
Proceedings of the 19th international conference on World wide web
Predicting searcher frustration
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Do user preferences and evaluation measures line up?
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Extending average precision to graded relevance judgments
Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval
Web search solved?: all result rankings the same?
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
On the evaluation of entity profiles
CLEF'10 Proceedings of the 2010 international conference on Multilingual and multimodal information access evaluation: cross-language evaluation forum
Find it if you can: a game for modeling different types of web search success using interaction data
Proceedings of the 34th international ACM SIGIR conference on Research and development in information retrieval
Reranking search results for sparse queries
Proceedings of the 20th ACM international conference on Information and knowledge management
Minersoft: Software retrieval in grid and cloud computing infrastructures
ACM Transactions on Internet Technology (TOIT)
Recommending source code for use in rapid software prototypes
Proceedings of the 34th International Conference on Software Engineering
When big data leads to lost data
Proceedings of the 5th Ph.D. workshop on Information and knowledge
Models and metrics: IR evaluation as a user process
Proceedings of the Seventeenth Australasian Document Computing Symposium
Playing by the rules: mining query associations to predict search performance
Proceedings of the sixth ACM international conference on Web search and data mining
Toward self-correcting search engines: using underperforming queries to improve search
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Relevance dimensions in preference-based IR evaluation
Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval
Users versus models: what observation tells us about effectiveness metrics
Proceedings of the 22nd ACM international conference on Information & knowledge management
Choices in batch information retrieval evaluation
Proceedings of the 18th Australasian Document Computing Symposium
Contextual and dimensional relevance judgments for reusable SERP-level evaluation
Proceedings of the 23rd international conference on World wide web
Evaluation in Music Information Retrieval
Journal of Intelligent Information Systems
This paper presents an experimental study of users assessing the quality of Google web search results. In particular, we examine how user satisfaction correlates with the effectiveness of Google as quantified by IR measures such as precision and the suite of cumulated-gain measures (CG, DCG, NDCG). Results indicate a strong correlation between user satisfaction and both CG and precision, a moderate correlation with DCG, and, perhaps surprisingly, a negligible correlation with NDCG. The reasons for the low correlation with NDCG are examined.
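For reference, the cumulated-gain measures compared in the study can be sketched in a few lines of Python. This is a minimal illustration of the Järvelin and Kekäläinen formulation (cited above as "Cumulated gain-based evaluation of IR techniques"), not the authors' experimental code; the function names and the log base 2 are our own choices.

```python
import math

def cg(gains):
    """Cumulated gain: the plain sum of graded relevance scores
    for the documents in a ranked list."""
    return sum(gains)

def dcg(gains, base=2):
    """Discounted cumulated gain: gains at ranks shallower than
    `base` are kept as-is; deeper gains are divided by log_base(rank),
    so relevant documents found late contribute less."""
    total = 0.0
    for rank, g in enumerate(gains, start=1):
        total += g if rank < base else g / math.log(rank, base)
    return total

def ndcg(gains, base=2):
    """Normalized DCG: DCG of the actual ranking divided by the DCG
    of the ideal ranking (the same gains sorted in decreasing order)."""
    ideal = dcg(sorted(gains, reverse=True), base)
    return dcg(gains, base) / ideal if ideal > 0 else 0.0

# Example: graded judgments (0-3) for the top six results of a query.
ranking = [3, 2, 3, 0, 1, 2]
print(cg(ranking))          # total gain, ignoring rank
print(round(dcg(ranking), 3))
print(round(ndcg(ranking), 3))  # 1.0 only for an ideally ordered list
```

The normalization step is what distinguishes NDCG from CG and DCG: a ranking is scored relative to the best achievable ordering of the same judged documents, which is one reason its behavior can diverge from raw precision-style measures.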