Micro perceptual human computation for visual tasks

  • Authors:
  • Yotam Gingold; Ariel Shamir; Daniel Cohen-Or

  • Affiliations:
  • Tel-Aviv University and The Interdisciplinary Center, Herzliya, Israel; The Interdisciplinary Center, Herzliya, Israel; Tel-Aviv University, Tel-Aviv, Israel

  • Venue:
  • ACM Transactions on Graphics (TOG)
  • Year:
  • 2012

Abstract

Human Computation (HC) utilizes humans to solve problems or carry out tasks that are hard for pure computational algorithms. Many graphics and vision problems involve such tasks. Previous HC approaches mainly focus on generating data in batch mode, gathering benchmarks, or performing surveys that demand nontrivial interactions. We advocate a tighter integration of human computation into online, interactive algorithms. We aim to distill the differences between humans and computers and maximize the advantages of both in one algorithm. Our key idea is to decompose such a problem into a massive number of very simple, carefully designed, human micro-tasks that are based on perception, and whose answers can be combined algorithmically to solve the original problem. Our approach is inspired by previous work on micro-tasks and perception experiments. We present three specific examples for the design of micro perceptual human computation algorithms to extract depth layers and image normals from a single photograph, and to augment an image with high-level semantic information such as symmetry.
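To make the micro-task pattern described in the abstract concrete, the sketch below illustrates one possible decomposition of a relative-depth query into pairwise perceptual comparisons, with redundant answers per comparison aggregated by majority vote and combined into a coarse front-to-back ordering. This is only an illustrative sketch, not the authors' implementation; the ask_worker function is a hypothetical stand-in for posting a single micro-task to a crowdsourcing platform and is simulated here with random answers.

    import random
    from collections import Counter
    from itertools import combinations

    def ask_worker(image, point_a, point_b):
        """Hypothetical stand-in for one perceptual micro-task:
        show a worker the image with two marked points and ask
        'Which point is closer to the camera?' Returns 'a' or 'b'.
        Simulated with a random answer for illustration only."""
        return random.choice(['a', 'b'])

    def collect_answers(image, points, redundancy=5):
        """Issue each pairwise comparison to several workers and
        keep the majority answer, discarding ties as unreliable."""
        votes = {}
        for a, b in combinations(range(len(points)), 2):
            answers = Counter(ask_worker(image, points[a], points[b])
                              for _ in range(redundancy))
            winner, count = answers.most_common(1)[0]
            if count > redundancy // 2:   # simple majority filter
                votes[(a, b)] = winner
        return votes

    def depth_order(points, votes):
        """Combine the micro-task answers algorithmically: score each
        point by how often it was judged closer, yielding a coarse
        front-to-back ordering (a proxy for depth layers)."""
        closer_count = Counter()
        for (a, b), winner in votes.items():
            closer_count[a if winner == 'a' else b] += 1
        return sorted(range(len(points)), key=lambda i: -closer_count[i])

    if __name__ == '__main__':
        image = None                               # placeholder for a photograph
        points = [(10, 20), (40, 80), (120, 60)]   # sample query locations
        votes = collect_answers(image, points)
        print('Front-to-back point order:', depth_order(points, votes))

The same dispatch-and-aggregate structure could, under similar assumptions, be adapted to the other tasks the paper names (normal estimation, symmetry annotation) by changing the question each micro-task poses and the function that combines the answers.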