Crowdsourcing has become a powerful paradigm for accomplishing work quickly and at scale, but it poses significant challenges for quality control. Researchers have developed algorithmic quality control approaches based on either worker outputs (such as gold standards or worker agreement) or worker behavior (such as task fingerprinting), but each approach has serious limitations, especially for complex or creative work. Human evaluation addresses these limitations but does not scale well with increasing numbers of workers. We present CrowdScape, a system that supports the human evaluation of complex crowd work through interactive visualization and mixed-initiative machine learning. The system combines information about worker behavior with worker outputs, helping users to better understand and harness the crowd. We describe the system and discuss its utility through grounded case studies. We explore other contexts where CrowdScape's visualizations might be useful, such as in user studies.
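As a rough illustration of the signals the abstract describes, the Python sketch below (a hypothetical example, not CrowdScape's implementation) combines two output-based measures, gold-standard accuracy and agreement with the majority answer, with one behavioral measure, time on task, into a per-worker summary. The record fields (worker, task, answer, seconds) and the sample data are assumptions for illustration only.

```python
# Illustrative sketch: fuse output-based and behavior-based quality signals
# into one per-worker profile that a requester could scan or visualize.
from collections import Counter, defaultdict
from statistics import mean

# Hypothetical task log: one record per submitted assignment.
records = [
    {"worker": "w1", "task": "t1", "answer": "cat", "seconds": 14.2},
    {"worker": "w2", "task": "t1", "answer": "cat", "seconds": 11.8},
    {"worker": "w3", "task": "t1", "answer": "dog", "seconds": 1.3},
    {"worker": "w1", "task": "t2", "answer": "dog", "seconds": 9.7},
    {"worker": "w2", "task": "t2", "answer": "dog", "seconds": 13.1},
    {"worker": "w3", "task": "t2", "answer": "dog", "seconds": 1.1},
]
gold = {"t1": "cat"}  # tasks with known answers (gold standard)

# Output signal 1: the majority answer per task (worker agreement).
majority = {
    task: Counter(r["answer"] for r in records if r["task"] == task).most_common(1)[0][0]
    for task in {r["task"] for r in records}
}

# Accumulate per-worker signals: agreement, gold accuracy, time on task.
profiles = defaultdict(lambda: {"agree": [], "gold": [], "seconds": []})
for r in records:
    p = profiles[r["worker"]]
    p["agree"].append(r["answer"] == majority[r["task"]])
    p["seconds"].append(r["seconds"])
    if r["task"] in gold:
        p["gold"].append(r["answer"] == gold[r["task"]])

for worker, p in sorted(profiles.items()):
    print(
        f"{worker}: majority agreement={mean(p['agree']):.2f}, "
        f"gold accuracy={mean(p['gold']) if p['gold'] else float('nan'):.2f}, "
        f"mean time={mean(p['seconds']):.1f}s"
    )
```

In a system like CrowdScape, per-worker features of this kind feed interactive visualizations rather than a fixed accept/reject threshold, so the requester decides what counts as suspicious (for instance, worker w3's sub-two-second submissions combined with low gold accuracy).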