Large-scale collaboration systems often separate their content from the deliberation around how that content was produced. Surfacing this deliberation may engender trust in the content-generation process if the deliberation appears fair, well reasoned, and thorough. Alternatively, it could raise doubts about content quality, especially if the process appears messy or biased. In this paper we report the results of an experiment in which surfacing deliberation generally lowered perceptions of quality for the article under consideration, especially, but not only, when the discussion revealed conflict. The size of the effect depended on the type of interaction among editors. Finally, this decrease in article quality ratings was accompanied by improved self-reported perceptions of the article and of Wikipedia overall.