An adaptive vision system toward implicit human computer interaction

  • Authors:
  • Peng Dai;Linmi Tao;Xiang Zhang;Ligeng Dong;Guangyou Xu

  • Affiliations:
Tsinghua National Lab on Information Science & Technology, Tsinghua University, Beijing, China (all authors)

  • Venue:
  • UAHCI'07 Proceedings of the 4th international conference on Universal access in human-computer interaction: ambient interaction
  • Year:
  • 2007


Abstract

In implicit human-computer interaction, computers are required to understand users' actions and intentions so as to provide proactive services. Visual processing must detect and understand human actions and then transform them into implicit input. In this paper, an adaptive vision system is presented to solve visual processing tasks in a dynamic meeting context. Visual modules and dynamic context analysis tasks are organized in a bidirectional scheme. First, human objects are detected and tracked to generate global features. Second, the current meeting scenario is inferred from these global features, and in certain scenarios face- and hand-blob-level visual processing tasks are carried out to extract visual information for the analysis of individual and interactive events, which can in turn serve as implicit input to the computer system. Experiments in our smart meeting room demonstrate the effectiveness of the proposed framework.
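The control flow the abstract describes (always-on global tracking, context inference, and context-gated fine-level processing) can be sketched as follows. This is a minimal illustration of the bidirectional scheme only; all function names, the scenario labels, and the activation rule are assumptions for illustration, not the authors' actual modules or API.

```python
# Sketch of the bidirectional scheme: coarse tracking always runs,
# the inferred scenario then decides whether fine-grained (face/hand
# blob level) modules are activated. Names and rules are hypothetical.

def track_humans(frame):
    # Stand-in for human-object detection and tracking; returns
    # global features (here, just the tracked positions).
    return {"positions": frame["positions"]}

def infer_scenario(global_features):
    # Illustrative rule: a single tracked person suggests a
    # presentation; several people suggest a group discussion.
    n = len(global_features["positions"])
    return "presentation" if n == 1 else "discussion"

def fine_grained_analysis(frame):
    # Stand-in for face/hand blob-level processing, run only when
    # the inferred context calls for it.
    return {"gesture": "pointing"}

def process_frame(frame):
    features = track_humans(frame)        # step 1: global level
    scenario = infer_scenario(features)   # step 2: context inference
    events = None
    if scenario == "presentation":        # top-down feedback: context
        events = fine_grained_analysis(frame)  # gates fine-level modules
    return scenario, events

# Synthetic usage with a one-person frame
scenario, events = process_frame({"positions": [(1.0, 2.0)]})
print(scenario, events)  # presentation {'gesture': 'pointing'}
```

The key design point mirrored here is that fine-level processing is not run on every frame; the inferred scenario selects which visual modules are active, which is what makes the scheme adaptive rather than a fixed pipeline.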