How to filter out random clickers in a crowdsourcing-based study?

  • Authors: Sung-Hee Kim, Hyokun Yun, Ji Soo Yi
  • Affiliations: Purdue University (all authors)
  • Venue: Proceedings of the 2012 BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization
  • Year: 2012


Abstract

Crowdsourcing-based user studies have become increasingly popular in information visualization (InfoVis) and visual analytics (VA). However, it is still unclear how to deal with undesired crowdsourcing workers, especially those who submit random responses simply to collect wages (henceforth, random clickers). To mitigate the impact of random clickers, several studies simply exclude outliers, but this approach risks discarding data from participants whose performance is extreme even though they participated faithfully. In this paper, we evaluated the degree of randomness in each worker's responses to infer whether that worker is a random clicker. This allowed us to reliably filter out random clickers, and the resulting data from crowdsourcing-based user studies were comparable to those of a controlled lab study. We also tested three representative reward schemes (piece-rate, quota, and punishment) with four levels of compensation ($0.00, $0.20, $1.00, and $4.00) on a crowdsourcing platform with a total of 1,500 crowdsourcing workers to investigate the influence of different payment conditions on the number of random clickers. The results show that higher compensation decreases the proportion of random clickers, but such an increase in participation quality cannot justify the associated additional costs. A detailed discussion of how to optimize the payment scheme and amount to obtain high-quality data economically is provided.
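
The abstract does not specify the randomness measure the authors use; purely as an illustration of the general idea, the sketch below flags a worker as a likely random clicker when their accuracy on K-alternative forced-choice questions cannot be distinguished from chance-level guessing. The function name, the one-sided binomial test, the significance level, and the three-alternative example are all assumptions for this sketch, not the paper's method.

```python
from scipy.stats import binom

def is_random_clicker(n_correct, n_questions, n_choices, alpha=0.05):
    """Flag a worker whose accuracy is statistically indistinguishable
    from uniform random guessing on K-alternative forced-choice tasks.

    The worker is kept only if we can reject the null hypothesis
    "responses are random clicks" (accuracy == 1/K) in favor of
    above-chance accuracy at significance level alpha.
    """
    chance = 1.0 / n_choices
    # One-sided p-value: probability of getting at least n_correct answers
    # right if the worker were guessing uniformly at random.
    p_value = binom.sf(n_correct - 1, n_questions, chance)
    return p_value >= alpha  # cannot reject randomness -> treat as random clicker

# Example: 12 correct out of 30 three-alternative questions.
# Chance accuracy is 1/3, so 12/30 = 0.40 is only weakly above chance
# and this worker would be flagged at alpha = 0.05.
print(is_random_clicker(n_correct=12, n_questions=30, n_choices=3))
```

A test of this kind depends only on each worker's own response pattern, which is why it can retain faithful participants with unusually slow or inaccurate performance that an outlier-based cutoff would discard.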