This paper summarizes the first CLEAR evaluation on CLassification of Events, Activities and Relationships, which took place in early 2006 and concluded with a two-day evaluation workshop in April 2006. CLEAR is an international effort to evaluate systems for the multimodal perception of people, their activities and their interactions, and it provides a new international evaluation framework for such technologies. Its aims are to support the definition of common evaluation tasks and metrics, to coordinate and leverage the production of the necessary multimodal corpora, and to make it possible to compare different algorithms and approaches on common benchmarks, thereby accelerating progress in the research community. This paper describes the evaluation tasks conducted in CLEAR 2006, including the metrics and databases used, and provides an overview of the results. The evaluation tasks in CLEAR 2006 comprised person tracking, face detection and tracking, person identification, head pose estimation, vehicle tracking, and acoustic scene analysis. Overall, more than 20 subtasks were conducted, covering acoustic, visual and audio-visual analysis for many of the main tasks, as well as different data domains and evaluation conditions.