Case study: recommending course reading materials in a small virtual learning community
International Journal of Web Based Communities
As various recommender approaches are increasingly considered in e-learning, the need for actual use cases to guide development efforts is growing. We report on our experiences of using non-algorithmic recommender features to recommend additional study materials on an undergraduate course in 2009--2011. The study data come from students' e-questionnaire replies and actual click-by-click usage data. Our discussion centres on using a binary (useful/not useful) rating scale (2009--2010) versus a five-star rating scale (2011). Using the five-star scale to increase the complexity of the rating decision significantly reduced dishonesty (rating items without viewing them), but at the price of fewer ratings overall and more complex interpretation of the ratings. In addition to explaining how ratings and other factors influenced item selection, we also discuss how the two scales (binary and five-star) affect rating behaviour in e-learning and how the five-star rating distributions in e-learning relate to those in other domains. Furthermore, we discuss two models of employing non-algorithmic recommender features in e-learning that emerge from our findings: a high-quality approach and a low-cost approach. The findings give the field insight into the actual dynamics of using recommender features in e-learning, and they provide practitioners with actionable information on dishonesty.