Web-based social data analysis tools that rely on public discussion to produce hypotheses or explanations of the patterns and trends in data rarely yield high-quality results in practice. Crowdsourcing offers an alternative approach in which an analyst pays workers to generate such explanations. Yet asking workers with varying skills, backgrounds and motivations to simply "Explain why a chart is interesting" can result in irrelevant, unclear or speculative explanations of variable quality. To address these problems, we contribute seven strategies for improving the quality and diversity of worker-generated explanations. Our experiments show that using (S1) feature-oriented prompts, providing (S2) good examples, and including (S3) reference gathering, (S4) chart reading, and (S5) annotation subtasks increases the quality of responses by 28% for US workers and 196% for non-US workers. Feature-oriented prompts improve explanation quality by 69% to 236% depending on the prompt. We also show that (S6) pre-annotating charts can focus workers' attention on relevant details, and demonstrate that (S7) generating explanations iteratively increases explanation diversity without increasing worker attrition. We used our techniques to generate 910 explanations for 16 datasets, and found that 63% were of high quality. These results demonstrate that paid crowd workers can reliably generate diverse, high-quality explanations that support the analysis of specific datasets.
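To make the scaffolding behind strategies S1 through S5 more concrete, the sketch below shows one way an analyst might assemble a chart-explanation task before posting it to a crowd market. This is a minimal, hypothetical illustration rather than the authors' implementation: the ExplanationTask structure, the build_task helper, and the example prompt and subtask wording are all assumptions introduced here for illustration only.

    # Hypothetical sketch of composing a crowdsourced chart-explanation task
    # following strategies S1-S5 described in the abstract above.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ExplanationTask:
        chart_url: str                    # chart the worker is asked to explain
        prompt: str                       # S1: feature-oriented prompt
        example_responses: List[str]      # S2: good example explanations shown to workers
        subtasks: List[str] = field(default_factory=list)  # S3-S5: scaffolding subtasks

    def build_task(chart_url: str, feature: str) -> ExplanationTask:
        """Compose a task that asks workers to explain one specific visual feature."""
        prompt = f"Explain why the chart shows {feature}."          # S1: name the feature
        examples = [
            "Sales dip every December because retailers discount aggressively.",
        ]                                                           # S2: concrete example
        subtasks = [
            "Find and cite one web page that supports your explanation.",   # S3: reference gathering
            "Report the highest and lowest values visible in the chart.",   # S4: chart reading
            "Mark the region of the chart your explanation refers to.",     # S5: annotation
        ]
        return ExplanationTask(chart_url, prompt, examples, subtasks)

    if __name__ == "__main__":
        task = build_task("https://example.com/chart.png", "a sharp drop in 2008")
        print(task.prompt)
        for s in task.subtasks:
            print("-", s)

In this framing, the feature-oriented prompt and the verifiable subtasks give workers a concrete target and a chance to demonstrate that they actually read the chart, which is the intent behind the quality gains reported for S1 through S5.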