VAMBAM: view and motion-based aspect models for distributed omnidirectional vision systems

  • Authors:
  • Hiroshi Ishiguro; Takuichi Nishimura

  • Affiliations:
  • Department of Computer & Communication Sciences, Wakayama University, Japan; Cyber Assist Research Center, National Institute of Advanced Industrial Science and Technology

  • Venue:
  • IJCAI'01: Proceedings of the 17th International Joint Conference on Artificial Intelligence - Volume 2
  • Year:
  • 2001

Abstract

This paper proposes a new model for gesture recognition. The model, called view and motion-based aspect models (VAMBAM), is an omnidirectional view-based aspect model built on motion-based segmentation. It realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The DOVS, which consists of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the concept of VAMBAM, this paper shows how the model enables robust, real-time visual recognition by the DOVS.
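
To make the idea concrete, here is a minimal Python sketch of aspect-model matching, assuming each gesture is stored as feature-vector templates taken from several viewing directions (aspects) and each camera reports one observed feature vector. The function name, the feature representation, and the Euclidean distance score are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def recognize_gesture(camera_views, aspect_models):
    """Hypothetical VAMBAM-style matcher.

    camera_views:  list of feature vectors, one per omnidirectional camera.
    aspect_models: dict mapping gesture name -> list of per-aspect
                   feature templates (one template per viewing direction).
    Returns the gesture whose best aspect/camera match is closest.
    """
    scores = {}
    for gesture, aspects in aspect_models.items():
        # Taking the best score over all aspects makes the match
        # rotation-free; taking it over all cameras makes it
        # location-free, since some camera sees a modeled aspect.
        scores[gesture] = max(
            -np.linalg.norm(view - aspect)   # negated distance as a score
            for view in camera_views
            for aspect in aspects
        )
    return max(scores, key=scores.get)

# Toy usage: two gestures, each modeled from two viewing directions,
# observed by three cameras (all vectors are made-up placeholders).
models = {
    "wave":  [np.array([1.0, 0.0]), np.array([0.9, 0.1])],
    "point": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
views = [np.array([0.95, 0.05]), np.array([0.5, 0.5])]
print(recognize_gesture(views, models))  # -> "wave"
```

Searching over all stored aspects is what removes the dependence on the subject's orientation, and pooling scores across cameras removes the dependence on location; the paper's actual matching operates on motion-segmented omnidirectional views rather than these abstract feature vectors.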