Predicting information seeker satisfaction in community question answering
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Question answering communities such as Yahoo! Answers have emerged as a popular alternative to general-purpose web search. By interacting directly with other participants, information seekers can obtain specific answers to their questions. However, askers' success in obtaining satisfactory answers varies greatly. We hypothesize that satisfaction with the contributed answers is largely determined by the asker's prior experience, expectations, and personal preferences. Hence, we begin to develop personalized models of asker satisfaction to predict whether a particular question author will be satisfied with the answers contributed by the community participants. We formalize this problem and explore a variety of content, structure, and interaction features for this task using standard machine learning techniques. Our experimental evaluation over thousands of real questions indicates that personalizing satisfaction predictions is indeed beneficial when sufficient prior user history exists, significantly improving accuracy over a "one-size-fits-all" prediction model.
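The core idea of the abstract — back off from a personalized predictor to a global "one-size-fits-all" model when the asker's history is too short — can be sketched as follows. This is not the paper's actual classifier (which uses content, structure, and interaction features with standard machine learning techniques); it is a minimal illustration where the personalized signal is simply the asker's prior satisfaction rate, and the `MIN_HISTORY` threshold is an assumed parameter.

```python
from collections import defaultdict

# Assumed threshold for "sufficient prior user history"; the paper
# determines this empirically, the value here is illustrative only.
MIN_HISTORY = 3

def train(history):
    """history: list of (asker_id, satisfied: bool) past questions.

    Returns a per-asker history table plus the global satisfaction
    rate used as the one-size-fits-all fallback.
    """
    per_user = defaultdict(list)
    for asker, satisfied in history:
        per_user[asker].append(satisfied)
    global_rate = sum(s for _, s in history) / len(history)
    return per_user, global_rate

def predict_satisfied(model, asker):
    """Predict whether this asker will be satisfied with the answers."""
    per_user, global_rate = model
    past = per_user.get(asker, [])
    if len(past) >= MIN_HISTORY:
        # Personalized prediction: the asker's own prior satisfaction rate.
        rate = sum(past) / len(past)
    else:
        # Insufficient history: fall back to the global prediction model.
        rate = global_rate
    return rate >= 0.5
```

For example, an asker with three prior satisfied questions would be predicted satisfied via the personalized path, while a first-time asker would receive whatever the global rate implies.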