Automatically characterizing places with opportunistic crowdsensing using smartphones

  • Authors:
  • Yohan Chon (Yonsei University, Seoul, Korea); Nicholas D. Lane (Microsoft Research Asia, Beijing, China); Fan Li (Microsoft Research Asia, Beijing, China); Hojung Cha (Yonsei University, Seoul, Korea); Feng Zhao (Microsoft Research Asia, Beijing, China)

  • Venue:
  • Proceedings of the 2012 ACM Conference on Ubiquitous Computing
  • Year:
  • 2012

Abstract

Automated and scalable approaches for understanding the semantics of places are critical to improving both existing and emerging mobile services. In this paper, we present CrowdSense@Place (CSP), a framework that exploits a previously untapped resource -- opportunistically captured images and audio clips from smartphones -- to link place visits with place categories (e.g., store, restaurant). CSP combines signals based on location and user trajectories (using WiFi/GPS) with various visual and audio place "hints" mined from opportunistic sensor data. Place hints include words spoken by people, text written on signs, and objects recognized in the environment. We evaluate CSP with a seven-week, 36-user experiment involving 1,241 places in five locations around the world. Our results show that CSP can classify places into a variety of categories with an overall accuracy of 69%, outperforming currently available alternative solutions.
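To illustrate the core idea described in the abstract -- mapping sensed place "hints" (spoken words, sign text, recognized objects) to a place category -- here is a minimal, hypothetical sketch. The paper does not disclose its classifier here; this stand-in uses a simple multinomial Naive Bayes over hint words, and all function names and training data below are illustrative assumptions, not CSP's actual implementation.

```python
from collections import Counter, defaultdict
import math

# Hypothetical sketch: classify a place visit into a category from a bag of
# "hints" mined from opportunistic sensor data. Naive Bayes is a stand-in
# chosen for brevity, not the classifier used in the CSP paper.

def train(labeled_visits):
    """labeled_visits: list of (category, [hint, ...]) pairs."""
    cat_counts = Counter()                 # how many visits per category
    word_counts = defaultdict(Counter)     # hint frequencies per category
    vocab = set()
    for cat, hints in labeled_visits:
        cat_counts[cat] += 1
        for h in hints:
            word_counts[cat][h] += 1
            vocab.add(h)
    return cat_counts, word_counts, vocab

def classify(hints, cat_counts, word_counts, vocab):
    """Return the category with the highest smoothed log-likelihood."""
    total = sum(cat_counts.values())
    best_cat, best_score = None, float("-inf")
    for cat, n in cat_counts.items():
        score = math.log(n / total)        # category prior
        denom = sum(word_counts[cat].values()) + len(vocab)
        for h in hints:
            # Laplace (add-one) smoothing so unseen hints don't zero out a category
            score += math.log((word_counts[cat][h] + 1) / denom)
        if score > best_score:
            best_cat, best_score = cat, score
    return best_cat

# Illustrative toy data, not from the paper's dataset.
visits = [
    ("restaurant", ["menu", "waiter", "table", "order"]),
    ("restaurant", ["menu", "coffee", "order"]),
    ("store", ["sale", "cashier", "price"]),
    ("store", ["price", "aisle", "cashier"]),
]
model = train(visits)
print(classify(["menu", "order"], *model))  # → restaurant
```

In the real system, the hint vocabulary would come from speech recognition, OCR on signs, and object recognition applied to the opportunistically captured audio and images, and the location/trajectory signals would be fused with these text features rather than ignored as in this toy example.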