Audio-visual fused online context analysis toward smart meeting room

  • Authors:
  • Peng Dai, Linmi Tao, Guangyou Xu

  • Affiliations:
  • Tsinghua National Lab on Information Science & Technology, Tsinghua University, Beijing, China (all authors)

  • Venue:
  • UIC'07 Proceedings of the 4th international conference on Ubiquitous Intelligence and Computing
  • Year:
  • 2007

Abstract

Context-aware systems incorporate multimodal information to analyze the context of users' environments and to provide proactive services that adapt to dynamic context. In this paper, a novel online context analysis framework is proposed to support context-aware computing in a smart meeting room. A dynamic context model is presented to capture human group interactions. Robust audio and visual modules are integrated for effective processing of multimodal signals from various sensors, and on this basis a multi-level dynamic context reasoning mechanism is adopted for online understanding of group interactions in meeting scenarios. Experimental results demonstrate the effectiveness of the proposed framework.
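
To make the layered structure described in the abstract more concrete, the sketch below illustrates one possible two-level arrangement: per-frame fusion of audio cues (who is speaking) and visual cues (where people are looking), followed by window-level reasoning that classifies the group interaction. The class names, thresholds, and interaction labels here are illustrative assumptions, not the authors' actual model or implementation.

```python
# Minimal sketch of multi-level audio-visual context reasoning.
# All module names, features, and thresholds are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class PersonObservation:
    """Per-person cues at one time step (hypothetical feature set)."""
    person_id: int
    is_speaking: bool        # e.g., from an audio speaker-detection module
    looking_at_screen: bool  # e.g., from a visual head-pose module


def fuse_frame(observations: List[PersonObservation]) -> dict:
    """Low level: fuse audio and visual cues for a single time step."""
    speakers = [o.person_id for o in observations if o.is_speaking]
    attending = sum(o.looking_at_screen for o in observations)
    return {
        "num_speakers": len(speakers),
        "attention_ratio": attending / max(len(observations), 1),
    }


def infer_group_interaction(window: List[dict]) -> str:
    """Higher level: classify the group interaction over a time window."""
    avg_speakers = sum(f["num_speakers"] for f in window) / len(window)
    avg_attention = sum(f["attention_ratio"] for f in window) / len(window)
    if avg_speakers <= 1 and avg_attention > 0.6:
        return "presentation"   # one speaker, audience focused on the screen
    if avg_speakers > 1:
        return "discussion"     # several participants speaking in turn or overlapping
    return "idle"


if __name__ == "__main__":
    frame = [
        PersonObservation(0, True, False),
        PersonObservation(1, False, True),
        PersonObservation(2, False, True),
    ]
    fused = [fuse_frame(frame) for _ in range(10)]   # pretend 10 identical time steps
    print(infer_group_interaction(fused))            # -> "presentation"
```

The two functions correspond loosely to the two levels mentioned in the abstract: signal-level fusion of the audio and visual modules, and context-level reasoning over the fused stream; the paper's actual mechanism may differ substantially.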