Don't look stupid: avoiding pitfalls when recommending research papers. In CSCW '06: Proceedings of the 2006 20th Anniversary Conference on Computer Supported Cooperative Work.
Can social information retrieval enhance the discovery and reuse of digital educational content? In Proceedings of the 2007 ACM Conference on Recommender Systems.
Recommending scientific articles using citeulike. In Proceedings of the 2008 ACM Conference on Recommender Systems.
Recommender systems: from algorithms to user experience. User Modeling and User-Adapted Interaction.
Effects of behavior monitoring and perceived system benefit in online recommender systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
TiNYARM: Awareness of Research Papers in a Community of Practice. In Proceedings of the 13th International Conference on Knowledge Management and Knowledge Technologies.
In order to build relevant, useful, and effective recommender systems, researchers need to understand why users come to these systems and how users judge recommendation lists. Today, researchers use accuracy-based metrics to judge recommendation quality, yet these metrics cannot capture users' criteria for judging how useful a recommendation is. We need to rethink recommenders from the user's perspective: they help users find new information. Thus, we need to know not only about the user, but also what the user is looking for. In this dissertation, we explore how to tailor recommendation lists not just to a user, but to the user's current information seeking task. We argue that each recommender algorithm has specific strengths and weaknesses that distinguish it from other algorithms, and that different algorithms are therefore better suited to particular users and their information seeking tasks. A recommender system should, then, select and tune the appropriate recommender algorithm (or algorithms) for a given combination of user and information seeking task.

To support this, we present results in three areas. First, we apply recommender systems in the domain of peer-reviewed computer science research papers, a domain where users have external criteria for selecting items to consume; we validate the effectiveness of our approach through several sets of experiments. Second, we argue that current recommender systems research is focused not on user needs, but on algorithm design and performance. To bring users back into focus, we reflect on how users perceive recommenders and the recommendation process, and present Human-Recommender Interaction theory, a framework and language for describing recommenders and the recommendation lists they generate. Third, we look at different ways of evaluating recommender system algorithms.
To this end, we propose a new set of recommender metrics, run experiments on several recommender algorithms using these metrics, and categorize the differences we discovered. Through Human-Recommender Interaction and these new metrics, we can bridge users and their needs with recommender algorithms to generate more useful recommendation lists.
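The distinction between accuracy-based metrics and list-level criteria can be sketched as follows. This is a minimal illustration, not the dissertation's actual metrics: the metric names (precision at k, intra-list diversity), the topic labels, and the example data are all assumptions introduced here, chosen only to show how two lists with different accuracy scores can rank oppositely on a list-level measure.

```python
def precision_at_k(recommended, relevant, k):
    """Accuracy-style metric: fraction of the top-k items the user found relevant."""
    top_k = recommended[:k]
    return sum(1 for item in top_k if item in relevant) / k

def intra_list_diversity(recommended, topic_of):
    """List-level metric: share of item pairs in the list covering different topics."""
    pairs = [(a, b) for i, a in enumerate(recommended)
                    for b in recommended[i + 1:]]
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if topic_of[a] != topic_of[b]) / len(pairs)

# Hypothetical data: which papers the user judged relevant, and each paper's topic.
relevant = {"p1", "p2", "p3"}
topic_of = {"p1": "IR", "p2": "IR", "p3": "HCI", "p4": "ML"}

narrow = ["p1", "p2", "p3"]   # every item relevant, topically concentrated
broad  = ["p1", "p3", "p4"]   # one miss, but wider topical spread

print(precision_at_k(narrow, relevant, 3))        # 1.0
print(precision_at_k(broad, relevant, 3))         # ~0.667
print(intra_list_diversity(narrow, topic_of))     # ~0.667
print(intra_list_diversity(broad, topic_of))      # 1.0
```

The narrow list wins on accuracy while the broad list wins on diversity; which is the "better" recommendation depends on the user's current information seeking task, which is the gap the proposed metrics aim to expose.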