Browsing the environment with the SNAP&TELL wearable computer system

  • Authors:
  • Trish Keaton; M. Dominguez; H. Sayed

  • Affiliations:
  • Information Sciences Laboratory, HRL Laboratories, LLC, USA; Electrical Engineering Department, University of California Los Angeles, USA; Electrical Engineering Department, University of California Los Angeles, USA

  • Venue:
  • Personal and Ubiquitous Computing
  • Year:
  • 2005

Abstract

This paper provides an overview of a multi-modal wearable computer system, SNAP&TELL. The system performs real-time gesture tracking, combined with audio-based control commands, in order to recognize objects in an environment, including outdoor landmarks. The system uses a single camera to capture images, which are then processed to perform color segmentation, fingertip shape analysis, robust tracking, and invariant object recognition, in order to quickly identify the objects encircled and SNAPped by the user's pointing gesture. In addition, the system returns an audio narration, TELLing the user information concerning the object's classification, historical facts, usage, etc. This system provides enabling technology for the design of intelligent assistants to support Web-On-The-World applications, with potential uses such as travel assistance, business advertisement, the design of smart living and working spaces, and pervasive wireless services and internet vehicles.