On the design and prototype implementation of a multimodal situation aware system

  • Authors:
  • Rajesh M. Hegde; Joseph Kurniawan; Bhaskar D. Rao

  • Affiliations:
  • Department of Electrical Engineering, Indian Institute of Technology, Kanpur, India; Networks in Motion, Aliso Viejo, CA; Department of Electrical and Computer Engineering, University of California at San Diego, La Jolla, CA

  • Venue:
  • IEEE Transactions on Multimedia
  • Year:
  • 2009


Abstract

In this paper, we describe the design concepts and prototype implementation of a situation-aware ubiquitous computing system that uses multiple modalities, including National Marine Electronics Association (NMEA) data from global positioning system (GPS) receivers, text, speech, environmental audio, and handwriting input. While most mobile and communication devices know where and who they are by accessing context information, primarily in the form of location, time stamps, and user identity, sharing this information in a reliable and intelligent fashion is crucial in many scenarios. In this work, we design and implement a framework that takes context-aware computing to the level of situation-aware computing through intelligent information exchange between context-aware devices. Four sensory modes of contextual information, namely text, speech, environmental audio, and handwriting, augment conventional contextual information sources such as location from GPS, user identity based on IP addresses (IPA), and time stamps. Each device derives its context not necessarily using the same criteria or parameters, but by employing selective fusion and fission of multiple modalities. Each individual modality is processed on the client device, and the resulting context is summarized as a text file. Dynamic context information is then exchanged between devices in real time to create multimodal situation-aware devices. A central repository of all user context profiles is also created to enable self-learning devices in the future. Based on the results of simulated situations and real field deployments, we show that the use of multiple modalities such as speech, environmental audio, and handwriting input alongside conventional modalities can create devices with enhanced situational awareness.
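
To make the described pipeline concrete, the following is a minimal, illustrative sketch of how a per-device context summary could be fused from multiple modality labels and serialized as a text payload for exchange between devices. The field names, confidence threshold, key-value serialization, and the example values are assumptions introduced here for illustration; they are not taken from the paper's prototype.

```python
# Hypothetical sketch of per-device context summarization and exchange.
# All field names, thresholds, and serialization choices are illustrative.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class ContextSummary:
    device_id: str          # user identity derived from the IP address (IPA)
    location: str           # GPS fix taken from an NMEA sentence
    timestamp: str          # time stamp of the observation
    speech_label: str       # label produced by the speech modality
    audio_label: str        # label produced by the environmental-audio modality
    handwriting_label: str  # label produced by the handwriting modality

    def to_text(self) -> str:
        """Serialize the fused context as a simple text-file payload."""
        return "\n".join(f"{key}={value}" for key, value in asdict(self).items())


def fuse_modalities(labels: dict[str, tuple[str, float]]) -> dict[str, str]:
    """Selective fusion/fission: keep modality labels whose confidence clears
    a (hypothetical) threshold and discard the rest."""
    threshold = 0.5
    return {name: label for name, (label, conf) in labels.items() if conf >= threshold}


if __name__ == "__main__":
    fused = fuse_modalities({
        "speech": ("meeting", 0.8),
        "audio": ("office-noise", 0.7),
        "handwriting": ("unknown", 0.2),  # low confidence, dropped by fission
    })
    summary = ContextSummary(
        device_id="192.0.2.10",
        location="4807.038,N,01131.000,E",  # lat/lon fields of a GPGGA sentence
        timestamp=datetime.now(timezone.utc).isoformat(),
        speech_label=fused.get("speech", "none"),
        audio_label=fused.get("audio", "none"),
        handwriting_label=fused.get("handwriting", "none"),
    )
    print(summary.to_text())  # payload a device could share with its peers
```

In this sketch, each client processes its own modalities locally and only the compact text summary crosses the network, which mirrors the paper's design of client-side processing followed by lightweight context exchange; the same payload could also be uploaded to a central repository of user context profiles.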