An evaluation framework for software crowdsourcing

  • Authors:
  • Wenjun Wu; Wei-Tek Tsai; Wei Li

  • Affiliations:
  • State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China; School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85281, USA, and Department of Computer Science and Technology, INLIST, Tsinghua University, Bei ...; State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China

  • Venue:
  • Frontiers of Computer Science: Selected Publications from Chinese Universities
  • Year:
  • 2013


Abstract

Software crowdsourcing has recently become an emerging area of software engineering, yet few papers have presented a systematic analysis of its practices. This paper first presents a framework for evaluating software crowdsourcing projects with respect to software quality, cost, diversity of solutions, and the competitive nature of crowdsourcing. Specifically, competitions are evaluated through a min-max relationship from game theory among participants, in which one party tries to minimize an objective function while the other party tries to maximize the same objective function. The paper then defines a game-theoretic model to analyze the primary factors in these min-max competition rules that affect participation as well as software quality. Finally, using the proposed evaluation framework, the paper examines two crowdsourcing processes, Harvard-TopCoder and AppStori. The framework reveals sharp contrasts between the two processes, as participants exhibit markedly different behaviors when engaging in each project.
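
To make the min-max relationship concrete, the following is a minimal sketch of a toy zero-sum competition between two crowdsourcing participants over a shared objective function. The payoff matrix, the strategy interpretations, and the participant roles are hypothetical illustrations and are not taken from the paper's actual game-theory model.

```python
# Illustrative sketch only: a toy zero-sum competition in which one party
# minimizes a shared objective while the other maximizes it.
# The payoff values and strategy labels below are hypothetical.
import numpy as np

# Rows: strategies of the minimizing party (e.g., a task sponsor's reward levels).
# Columns: strategies of the maximizing party (e.g., a worker's effort levels).
# Each entry is the value of the shared objective for that strategy pair.
payoffs = np.array([
    [3.0, 1.0, 4.0],
    [2.0, 5.0, 2.0],
    [6.0, 3.0, 1.0],
])

# Min-max value over pure strategies: the minimizer picks the row whose
# worst-case (column-maximized) payoff is smallest.
minmax_value = payoffs.max(axis=1).min()

# Max-min value: the maximizer picks the column whose worst-case
# (row-minimized) payoff is largest.
maxmin_value = payoffs.min(axis=0).max()

print(f"min-max value: {minmax_value}, max-min value: {maxmin_value}")
```

When the min-max and max-min values coincide, the game has a pure-strategy saddle point; in general, equilibrium behavior depends on how the competition rules shape the shared objective, which is the kind of factor the paper's model analyzes.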