Computer scientists have responded to the high prevalence of inaccurate political information online by creating systems that identify and flag false claims. Warning users of inaccurate information as it is displayed has obvious appeal, but it also poses risk. Compared to post-exposure corrections, real-time corrections may cause users to be more resistant to factual information. This paper presents an experiment comparing the effects of real-time corrections to corrections that are presented after a short distractor task. Although real-time corrections are modestly more effective than delayed corrections overall, closer inspection reveals that this is only true among individuals predisposed to reject the false claim. In contrast, individuals whose attitudes are supported by the inaccurate information distrust the source more when corrections are presented in real time, yielding beliefs comparable to those never exposed to a correction. We find no evidence of real-time corrections encouraging counterargument. Strategies for reducing these biases are discussed.