Multimedia multimodal methodologies

  • Authors:
  • L. Guan; P. Muneesawang; Y. Wang; R. Zhang; Y. Tie; A. Bulzacki; M. T. Ibrahim

  • Affiliations:
  • Ryerson Multimedia Laboratory, Ryerson University, Toronto, Canada; Naresuan University, Thailand; Department of Electrical and Computer Engineering, University of Toronto, Canada

  • Venue:
  • ICME'09 Proceedings of the 2009 IEEE international conference on Multimedia and Expo
  • Year:
  • 2009

Abstract

This paper outlines several multimedia systems that adopt a multimodal approach, including audiovisual emotion recognition, image and video retrieval, and face and head tracking. Data collected from diverse sources and sensors are employed to improve the accuracy of detecting, classifying, identifying, and tracking a desired object or target. It is shown that integrating multimodal data is more efficient and potentially more accurate than relying on data acquired from a single source. A number of cutting-edge applications for multimodal systems are discussed, and an advanced assistance robot built on these multimodal systems is presented.
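To make the fusion idea in the abstract concrete, the sketch below shows one common way to combine evidence from two modalities: decision-level (late) fusion of per-modality classifier scores, illustrated for audiovisual emotion recognition. This is only an illustrative assumption; the paper does not specify this particular scheme, and all names, weights, and score values in the code are hypothetical.

```python
import numpy as np

# Hypothetical emotion classes (illustrative only, not taken from the paper).
EMOTIONS = ["angry", "happy", "neutral", "sad"]


def softmax(scores: np.ndarray) -> np.ndarray:
    """Convert raw per-class scores into a probability distribution."""
    e = np.exp(scores - scores.max())
    return e / e.sum()


def late_fusion(audio_scores: np.ndarray,
                visual_scores: np.ndarray,
                audio_weight: float = 0.4) -> np.ndarray:
    """Decision-level fusion: weighted sum of per-modality posteriors.

    The weight is a tunable assumption; in practice it would be set by
    validation or by per-modality reliability estimates.
    """
    p_audio = softmax(audio_scores)
    p_visual = softmax(visual_scores)
    return audio_weight * p_audio + (1.0 - audio_weight) * p_visual


if __name__ == "__main__":
    # Hypothetical raw scores from a speech-emotion model and a
    # facial-expression model for the same observation.
    audio = np.array([1.2, 0.3, 0.8, 2.0])
    visual = np.array([0.5, 2.2, 1.0, 0.6])

    fused = late_fusion(audio, visual)
    print("fused posterior:", dict(zip(EMOTIONS, fused.round(3))))
    print("predicted emotion:", EMOTIONS[int(np.argmax(fused))])
```

In this toy example the visual modality pulls the fused decision toward "happy" even though the audio scores alone favor "sad", which is the basic benefit the abstract claims for combining sensors: each modality can correct the other's weaknesses.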