Question answering (QA) helps one go beyond traditional keyword-based querying and retrieve information in a more precise form than a document or a list of documents. Several community-based QA (CQA) services have emerged that allow information seekers to pose their information need as questions and receive answers from fellow users. A question may receive multiple answers from multiple users, and the asker or the community can choose the best answer. While askers can thus indicate whether they were satisfied with the information they received, there is no clear way of evaluating the quality of that information. We present a study to evaluate and predict the quality of an answer in a CQA setting. We chose Yahoo! Answers as our CQA service and selected a small set of questions, each with at least five answers. We asked Amazon Mechanical Turk workers to rate the quality of each answer to a given question on 13 different criteria; each answer was rated by five different workers. We then matched their assessments against the asker's actual rating of a given answer, and we show that the quality criteria we used faithfully match the asker's perception of a quality answer. We furthered our investigation by extracting various features from the questions, the answers, and the users who posted them, and by training a number of classifiers to select the best answer using those features. We demonstrate the high predictability of our trained models, along with the relative merits of each feature for such prediction. These models support our argument that in the case of CQA, contextual information, such as a user's profile, can be critical in evaluating and predicting content quality.
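The pipeline described above (extract features from the question, its answers, and the answerers, then have a trained model pick the best answer) can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual feature set or classifiers: the features (answer length, term overlap with the question, answerer reputation) and the hand-set weights are hypothetical stand-ins for a learned model.

```python
# Hypothetical sketch of best-answer selection in a CQA setting.
# Features and weights are illustrative assumptions, not the study's own.

def features(question, answer, reputation):
    """Extract simple surface features from an answer and its author."""
    q_terms = set(question.lower().split())
    a_terms = set(answer.lower().split())
    return {
        "length": len(a_terms),            # vocabulary size of the answer
        "overlap": len(q_terms & a_terms), # topical match with the question
        "reputation": reputation,          # contextual (user-profile) signal
    }

# Hand-set weights standing in for coefficients a classifier would learn.
WEIGHTS = {"length": 0.1, "overlap": 1.0, "reputation": 0.5}

def score(question, answer, reputation):
    f = features(question, answer, reputation)
    return sum(WEIGHTS[k] * v for k, v in f.items())

def predict_best(question, answers):
    """answers: list of (text, answerer_reputation) pairs.
    Returns the index of the highest-scoring answer."""
    scores = [score(question, text, rep) for text, rep in answers]
    return max(range(len(scores)), key=scores.__getitem__)
```

In a real replication, the weights would come from a classifier trained against the asker-chosen best answers, and the feature set would include the richer question, answer, and user-profile signals the study examines.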