Ontology and Taxonomy Collaborated Framework for Meeting Classification
ICPR '04: Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 4
In implicit human-computer interaction, computers must understand users' actions and intentions in order to provide proactive services. Visual processing has to detect and interpret human actions and then transform them into implicit input. This paper presents an adaptive vision system that handles visual processing tasks in a dynamic meeting context. Visual modules and dynamic context analysis tasks are organized in a bidirectional scheme. First, human objects are detected and tracked to generate global features. Second, the current meeting scenario is inferred from these global features; in certain scenarios, face- and hand-blob-level visual processing tasks are then performed to extract visual information for the analysis of individual and interactive events, which can in turn serve as implicit input to the computer system. Experiments in our smart meeting room demonstrate the effectiveness of the proposed framework.
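The bidirectional scheme described above can be sketched as a simple top-down controller: coarse global features drive scenario inference, and the inferred scenario decides which fine-grained visual modules to activate. The following is a minimal illustrative sketch, not the authors' implementation; all class names, thresholds, scenario labels, and module names are assumptions introduced here for illustration.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch of the paper's bidirectional scheme: global
# features from person detection/tracking feed scenario inference,
# and the inferred scenario gates which blob-level visual modules
# (face, hand) are run. Thresholds and labels are illustrative only.

@dataclass
class GlobalFeatures:
    num_people: int        # number of tracked human objects
    motion_energy: float   # aggregate motion in the room, in [0, 1]

def infer_scenario(f: GlobalFeatures) -> str:
    """Rule-based stand-in for the scenario-inference step."""
    if f.num_people == 0:
        return "empty-room"
    if f.num_people == 1 and f.motion_energy > 0.5:
        return "presentation"
    return "discussion"

def select_modules(scenario: str) -> List[str]:
    """Top-down control: only some scenarios trigger fine-grained analysis."""
    fine_grained = {
        "presentation": ["face-pose", "hand-gesture"],
        "discussion": ["face-pose"],
    }
    return fine_grained.get(scenario, [])

features = GlobalFeatures(num_people=1, motion_energy=0.8)
scenario = infer_scenario(features)
print(scenario, select_modules(scenario))  # presentation ['face-pose', 'hand-gesture']
```

In a real system the rule-based `infer_scenario` would be replaced by a learned model (e.g., an HMM over global feature sequences), but the control flow, inferring context before committing to expensive blob-level processing, stays the same.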