Automatic detection of interaction groups
ICMI '05 Proceedings of the 7th international conference on Multimodal interfaces
This paper addresses the extraction of small group configurations and activities in an intelligent meeting environment. The proposed approach takes as input a continuous stream of observations coming from different sensors in the environment. The goal is to separate distinct distributions of these observations, each corresponding to a distinct group configuration and activity. We explore an unsupervised method based on the Jeffrey divergence between histograms of observations; the resulting distinct distributions can be interpreted as distinct segments of group configuration and activity. To evaluate this approach, we recorded a seminar and a cocktail party meeting. The observations of the seminar were generated by a speech activity detector, while the observations of the cocktail party meeting were generated by both the speech activity detector and a visual tracking system. We measured the correspondence between detected segments and labelled group configurations and activities. The results are promising, particularly as the method is completely unsupervised.
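The core idea can be illustrated with a short sketch: compute the Jeffrey divergence (the symmetrized Kullback-Leibler divergence against the mean distribution) between histograms taken from two adjacent sliding windows over a stream of discrete observation symbols; peaks in the resulting curve suggest a change in the underlying distribution, i.e. a segment boundary. This is a minimal illustration under assumed parameters (window size, discrete symbol alphabet), not the authors' exact pipeline; the function names `jeffrey_divergence` and `divergence_curve` are our own.

```python
import numpy as np

def jeffrey_divergence(h, k, eps=1e-12):
    # Jeffrey divergence between two (unnormalized) histograms:
    # symmetrized KL divergence of each against the mean distribution.
    # eps avoids log(0) for empty bins.
    h = np.asarray(h, dtype=float) + eps
    k = np.asarray(k, dtype=float) + eps
    h = h / h.sum()
    k = k / k.sum()
    m = (h + k) / 2.0
    return float(np.sum(h * np.log(h / m) + k * np.log(k / m)))

def divergence_curve(stream, n_symbols, win=50):
    # Slide two adjacent windows of length `win` over a stream of
    # discrete observation symbols (0 .. n_symbols-1) and compute the
    # divergence between their histograms at each position.
    stream = np.asarray(stream)
    curve = np.zeros(len(stream))
    for t in range(win, len(stream) - win):
        left = np.bincount(stream[t - win:t], minlength=n_symbols)
        right = np.bincount(stream[t:t + win], minlength=n_symbols)
        curve[t] = jeffrey_divergence(left, right)
    return curve
```

On a synthetic stream whose symbol distribution changes at some time step, the curve attains its maximum near that change point, which is how distinct observation distributions can be separated without supervision.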