On the use of learning object metadata: the GLOBE experience
EC-TEL'11 Proceedings of the 6th European conference on Technology enhanced learning: towards ubiquitous learning
Owing to recent developments in automatic metadata generation and interoperability between digital repositories, metadata production now vastly surpasses manual quality-control capacity. Abandoning quality control altogether is problematic, because low-quality metadata compromise the effectiveness of the services that repositories provide to their users. To address this problem, we present a set of scalable quality metrics for metadata based on the Bruce & Hillman framework for metadata quality control. We evaluate the metrics in three experiments: (1) the degree of correlation between the metrics and manual quality reviews, (2) their power to discriminate between metadata sets, and (3) their usefulness as filters for low-quality instances. Statistical analysis shows that several metrics, especially Text Information Content, correlate well with human evaluation, and that the average of all the metrics is roughly as effective as human reviewers at flagging low-quality instances. We discuss the implications of this finding and, finally, propose applications of the metrics to improve tools for the administration of digital repositories.
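The abstract does not give the exact formula behind the Text Information Content metric, but metrics of this family are commonly built on the self-information of a record's terms relative to corpus-wide term frequencies: records full of generic boilerplate terms score low, records with distinctive, descriptive terms score high. The sketch below is a hypothetical illustration under that assumption (the function name, smoothing scheme, and toy corpus are all invented for this example, not taken from the paper):

```python
import math
from collections import Counter

def text_information_content(record_text, corpus_term_counts, corpus_total):
    """Average self-information (-log2 p) of a record's terms, where p is
    the term's smoothed relative frequency in the whole metadata corpus.
    NOTE: an illustrative stand-in, not the paper's actual definition."""
    terms = record_text.lower().split()
    if not terms:
        return 0.0
    vocab = len(corpus_term_counts)
    score = 0.0
    for t in terms:
        # Laplace smoothing so unseen terms get a small nonzero probability
        p = (corpus_term_counts.get(t, 0) + 1) / (corpus_total + vocab + 1)
        score += -math.log2(p)
    return score / len(terms)

# Toy corpus standing in for title/description fields of a repository
corpus = ["introduction to physics",
          "physics lecture notes",
          "an introduction to chemistry"]
counts = Counter(w for doc in corpus for w in doc.split())
total = sum(counts.values())

generic = text_information_content("introduction to physics", counts, total)
specific = text_information_content("quantum chromodynamics primer", counts, total)
assert specific > generic  # rarer, more descriptive terms carry more information
```

A scalable low-quality filter of the kind the abstract evaluates could then simply threshold such scores (or an average over several metrics) to flag records for human review.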