The ubiquity of smartphones has led to the emergence of mobile crowdsourcing markets, where smartphone users perform tasks in the physical world. Mobile crowdsourcing markets differ fundamentally from their online counterparts in that they require spatial mobility, and are therefore shaped by geographic factors and constraints that are not present in the online case. Despite the emergence and importance of such mobile marketplaces, little is known about the labor dynamics and mobility patterns of their agents. This paper provides an in-depth exploration of labor dynamics in mobile task markets based on a year-long dataset from a leading mobile crowdsourcing platform. We find that a small core group of workers (super agents) generated the bulk of the activity (80%) in the market. These super agents are more efficient than other agents across several dimensions: a) they are willing to travel longer distances to perform tasks, yet they amortize that travel across more tasks; b) they work and search for tasks more efficiently; c) they achieve higher data quality in terms of accepted submissions; and d) they improve on almost all of these efficiency measures over time. We find that super agent efficiency stems from two simple optimizations: they are 3x more likely than other agents to chain tasks, and they pick fewer low-priced tasks than other agents. We compare mobile and online micro-task markets, and discuss differences in demographics, data quality, and time of use, as well as similarities in super agent behavior. We conclude with a discussion of how a mobile micro-task market might leverage some of our results to improve performance.
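The efficiency measures discussed above (task chaining and travel amortization) can be illustrated with a minimal sketch. This is not the paper's analysis pipeline; the record schema (`agent_id`, Unix timestamp, latitude/longitude pair), the `chain_gap_s` threshold, and the function names are all hypothetical assumptions chosen for illustration. It shows how, given per-agent task-completion logs, one might compute the number of tasks, total travel distance, and the fraction of tasks "chained" (completed within a short gap of the previous task).

```python
from collections import defaultdict
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    # Great-circle distance in km between two (lat, lon) pairs in degrees.
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

def agent_stats(records, chain_gap_s=1800):
    """Hypothetical schema: records is an iterable of
    (agent_id, unix_timestamp, (lat, lon)) tuples, one per completed task.

    Returns {agent_id: (n_tasks, km_travelled, chained_fraction)},
    where a task counts as "chained" if it follows the previous task
    by the same agent within chain_gap_s seconds (assumed threshold).
    """
    by_agent = defaultdict(list)
    for agent, ts, loc in records:
        by_agent[agent].append((ts, loc))

    stats = {}
    for agent, tasks in by_agent.items():
        tasks.sort()  # chronological order
        km = 0.0
        chained = 0
        for (t0, p0), (t1, p1) in zip(tasks, tasks[1:]):
            km += haversine_km(p0, p1)  # travel between consecutive tasks
            if t1 - t0 <= chain_gap_s:
                chained += 1
        n = len(tasks)
        frac = chained / (n - 1) if n > 1 else 0.0
        stats[agent] = (n, km, frac)
    return stats
```

With per-agent tuples like these, comparing the distributions of `km_travelled / n_tasks` and `chained_fraction` across worker groups would surface exactly the kind of amortization and chaining gap the abstract attributes to super agents.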