Extracting information from multimedia meeting collections

  • Authors:
  • Daniel Gatica-Perez;Dong Zhang;Samy Bengio

  • Affiliations:
  • IDIAP Research Institute, Martigny, Switzerland (all authors)

  • Venue:
  • Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval
  • Year:
  • 2005

Abstract

Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest for three reasons: the increasing feasibility of recording them in large quantities, the opportunities for information access and retrieval applications derived from the automatic extraction of relevant meeting information, and the challenges that extracting semantic information from real human activities entails. In this paper, we present a succinct overview of recent approaches in this field, largely influenced by our own experiences. We first review some of the existing and potential needs of users of multimedia meeting information systems. We then summarize recent work in various research areas that addresses some of these requirements. In more detail, we describe our work on the automatic analysis of human interaction patterns from audio-visual sensors, and discuss open issues in this domain.