User-generated content (UGC) such as Wikipedia is emerging on the web at an explosive rate, but its data quality varies dramatically. How to rate an article's quality effectively is therefore a focus of both the research and industry communities. Observing that each quality class exhibits specific characteristics along different quality dimensions, we propose to learn a web quality corpus that takes these dimensions into account. Each article is regarded as an aggregation of sections, and each section's quality is modelled with a Dynamic Bayesian Network (DBN) with respect to accuracy, completeness, and consistency. Each quality class is then represented by three dimension corpora: an accuracy corpus, a completeness corpus, and a consistency corpus. Finally, we propose two schemes to compute a quality ranking. Experiments show that our approach performs well.
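To make the overall idea concrete, the following is a minimal, hypothetical sketch of the classification step: an article is treated as an aggregation of sections, each section contributes a score per quality dimension (accuracy, completeness, consistency), and the article is assigned the quality class whose dimension profile it most resembles. The class names, the mean-profile representation of the dimension corpora, and the L1 nearest-profile rule are all illustrative assumptions, not the paper's actual DBN-based method.

```python
# Hypothetical sketch only: per-class dimension "corpora" are reduced here to
# mean (accuracy, completeness, consistency) score profiles; the real approach
# models section quality with a Dynamic Bayesian Network.
from statistics import mean

# Assumed reference profiles learned from labelled articles of each class.
CLASS_PROFILES = {
    "featured": (0.90, 0.85, 0.90),
    "good":     (0.75, 0.70, 0.80),
    "stub":     (0.50, 0.30, 0.60),
}

def article_profile(section_scores):
    """Aggregate section-level (acc, comp, cons) tuples into an article profile."""
    return tuple(mean(dim) for dim in zip(*section_scores))

def classify(section_scores):
    """Assign the quality class whose profile is nearest in L1 distance."""
    profile = article_profile(section_scores)
    return min(
        CLASS_PROFILES,
        key=lambda c: sum(abs(p - q) for p, q in zip(profile, CLASS_PROFILES[c])),
    )

# Example: an article with two sections of moderately high quality.
sections = [(0.80, 0.70, 0.85), (0.70, 0.75, 0.80)]
print(classify(sections))
```

Under this toy scheme, the two-section example above lands nearest the "good" profile; a ranking scheme could then order articles within a class by their distance to the profile.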