Injecting life into toys

  • Authors:
  • Songchun Fan; Hyojeong Shin; Romit Roy Choudhury

  • Affiliations:
  • Duke University, Durham; Duke University, Durham; University of Illinois, Urbana-Champaign

  • Venue:
  • Proceedings of the 15th Workshop on Mobile Computing Systems and Applications
  • Year:
  • 2014

Abstract

This paper envisions a future in which smartphones can be inserted into toys, such as a teddy bear, to make them interactive for children. Our idea is to leverage the smartphones' sensors to sense children's gestures, cues, and reactions, and to interact back through acoustics, vibration, and, when possible, the smartphone display. This paper is an attempt to explore this vision, ponder applications, and take the first steps towards addressing some of the challenges. Our limited measurements from actual kids indicate that each child is quite unique in his/her "gesture vocabulary", motivating the need for personalized models. To learn these models, we employ signal processing-based approaches that first identify the presence of a gesture in a phone's sensor stream, and then learn its patterns for reliable classification. Our approach does not require manual supervision (i.e., the child is not asked to make any specific gesture); the phone detects and learns through observation and feedback. Our prototype, while far from a complete system, exhibits promise: we now believe that an unsupervised sensing approach can enable new kinds of child-toy interactions.
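
The two-stage approach described in the abstract (first spotting when a gesture occurs in the raw sensor stream, then grouping recurring gestures without labels) can be illustrated with a minimal sketch. The Python code below is an assumption-laden illustration, not the authors' implementation: it segments an accelerometer stream with a sliding-window energy threshold and clusters the detected segments with k-means; the sampling rate, window length, threshold, and feature set are all hypothetical choices.

```python
# Minimal sketch (illustrative, not the paper's system) of unsupervised
# gesture detection and grouping from a smartphone accelerometer stream:
#   1. flag "active" regions via a smoothed motion-energy threshold,
#   2. cluster detected segments so recurring gestures form groups
#      without asking the child to perform labeled examples.
import numpy as np
from sklearn.cluster import KMeans

FS = 50               # assumed sampling rate (Hz)
WIN = FS // 2         # 0.5 s smoothing window / minimum gesture length
ENERGY_THRESH = 1.5   # assumed motion-energy threshold (gravity removed)

def detect_segments(accel):
    """Return (start, end) index pairs where smoothed motion energy is high.

    accel: (N, 3) array of accelerometer samples with gravity subtracted.
    """
    energy = np.linalg.norm(accel, axis=1)
    smooth = np.convolve(energy, np.ones(WIN) / WIN, mode="same")
    active = smooth > ENERGY_THRESH
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= WIN:          # drop very short spikes
                segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

def features(accel, seg):
    """Simple per-axis mean/std plus average energy for one segment."""
    s, e = seg
    window = accel[s:e]
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           [np.linalg.norm(window) / (e - s)]])

def learn_gesture_clusters(accel, n_clusters=4):
    """Group detected segments into recurring gesture types (no labels)."""
    segs = detect_segments(accel)
    if len(segs) < n_clusters:
        return segs, None
    X = np.array([features(accel, s) for s in segs])
    model = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    return segs, model

if __name__ == "__main__":
    # Synthetic stream: quiet stretches interleaved with bursts of motion.
    rng = np.random.default_rng(0)
    quiet = rng.normal(0, 0.1, (200, 3))
    burst = rng.normal(0, 3.0, (100, 3))
    stream = np.vstack([quiet, burst, quiet, burst, quiet])
    segs, model = learn_gesture_clusters(stream, n_clusters=2)
    print("detected segments:", segs)
```

In the paper's setting, the observation-and-feedback loop mentioned in the abstract (the toy responds and watches the child's reaction) would refine such clusters into a personalized gesture vocabulary over time; this sketch covers only the detection and unsupervised grouping steps.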