The continuous growth of learning resources available in on-line repositories has raised concern about the development of automated methods for quality assessment. The existence of on-line evaluations in such repositories has opened the possibility of searching for statistical profiles of highly-rated resources that can be used as a priori indicators of quality. In this paper, we analyzed 35 metrics of learning objects refereed in the MERLOT repository and elaborated profiles for these resources across the different categories of disciplines and material types available. We found that some of the intrinsic metrics present significant differences between highly-rated and poorly-rated resources, and that those differences depend on the discipline category to which the resource belongs and on the type of the resource. Moreover, we found that different profiles should be identified according to the type of rating (peer review or user) under evaluation. Finally, we developed an initial model using linear discriminant analysis to evaluate the strength of relevant metrics in an automated quality classification task. The initial results of this work are promising and will serve as the foundation for the further development of an automated tool for contextualized quality assessment of learning objects inside repositories.
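The classification approach described above can be illustrated with a minimal sketch. This is not the paper's actual model: the 35 MERLOT metrics and rating data are not reproduced here, so the example uses three hypothetical intrinsic metrics (link count, image count, word count) with synthetic values, and scikit-learn's `LinearDiscriminantAnalysis` to separate highly-rated from poorly-rated resources.

```python
# Hedged sketch of an LDA-based quality classifier, assuming synthetic
# stand-in metrics rather than the paper's 35 MERLOT metrics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200  # synthetic resources per class

# Hypothetical intrinsic metrics: [link count, image count, word count].
highly_rated = rng.normal(loc=[40, 12, 900], scale=[10, 4, 200], size=(n, 3))
poorly_rated = rng.normal(loc=[15, 4, 400], scale=[10, 4, 200], size=(n, 3))

X = np.vstack([highly_rated, poorly_rated])
y = np.array([1] * n + [0] * n)  # 1 = highly rated, 0 = poorly rated

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The fitted coefficients indicate each metric's weight in the
# discriminant function, i.e. its strength for the classification task.
print("per-metric weights:", lda.coef_[0])
print("training accuracy:", lda.score(X, y))
```

In a contextualized setting such as the one the abstract describes, a separate model could be fitted per discipline category and material type, since the relevant metrics differ across those groups.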