The power and generality of the findings obtained through empirical studies are bounded by the number and type of participating subjects. In software engineering, obtaining a large number of adequate subjects to evaluate a technique or tool is often a major challenge. In this work we explore the use of crowdsourcing as a mechanism to address that challenge by assisting in subject recruitment. More specifically, we show how we adapted a study to be performed on an infrastructure that not only makes it possible to reach a large base of users but also provides capabilities to manage those users as the study is being conducted. We discuss the lessons learned through this experience, which illustrate the potential and tradeoffs of crowdsourcing software engineering studies.
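The abstract does not name the crowdsourcing platform or its API, but Amazon Mechanical Turk is the canonical example of an infrastructure that both reaches a large worker base and offers controls for managing participants during a study. The sketch below, assuming the boto3 MTurk client and a hypothetical task form in task.xml, illustrates one such control: posting a task with a qualification requirement so that only workers who pass a screening criterion can accept it. It is an illustrative sketch, not the authors' actual study setup.

```python
# Hypothetical sketch: post a study task (HIT) on Mechanical Turk with a
# screening requirement. Assumes AWS credentials are configured and that
# task.xml contains the HTMLQuestion/ExternalQuestion XML for the task.
import boto3

# Sandbox endpoint for testing; drop endpoint_url to post live HITs.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# Built-in qualification: approval rate >= 95%, a common screening filter
# used to exclude low-quality or random-clicking workers.
screening = [{
    "QualificationTypeId": "000000000000000000L0",  # PercentAssignmentsApproved
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],
    "ActionsGuarded": "Accept",
}]

with open("task.xml") as f:
    question_xml = f.read()

hit = mturk.create_hit(
    Title="Software engineering study task",
    Description="Complete a short program-understanding task.",
    Keywords="study, programming, software engineering",
    Reward="1.00",
    MaxAssignments=50,                    # number of distinct participants
    AssignmentDurationInSeconds=1800,     # time allotted per participant
    LifetimeInSeconds=7 * 24 * 3600,      # how long the task stays available
    QualificationRequirements=screening,
    Question=question_xml,
)
print("HIT id:", hit["HIT"]["HITId"])
```

Once the HIT is live, the same client can be used to review, approve, or reject submitted assignments, which is the kind of in-flight participant management the abstract refers to.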