Identifying people and tracking their locations is a key prerequisite for achieving context awareness in smart spaces. Moreover, in realistic context-aware applications, these tasks have to be carried out in a non-obtrusive fashion. In this paper we present a set of robust person-identification and tracking algorithms based on audio and visual processing. A main characteristic of these algorithms is that they operate on far-field, unconstrained audio-visual streams, which ensures that they are non-intrusive. We also illustrate that combining their outputs can yield composite multimodal tracking components suitable for supporting a broad range of context-aware services. In combining audio-visual processing results, we exploit a context-modeling approach based on a graph of situations. Accordingly, we discuss the implementation of realistic prototype applications that make use of the full range of audio, visual, and multimodal algorithms.
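To make the "graph of situations" idea concrete, the following is a minimal sketch of such a context model. All names, situations, and events here are illustrative assumptions, not taken from the paper: situations form the nodes of a graph, and fused audio-visual observations trigger transitions between them.

```python
class SituationGraph:
    """Hypothetical situation-graph context model: nodes are situations,
    and observed events move the model along labeled transitions."""

    def __init__(self):
        self.transitions = {}  # (situation, event) -> next situation
        self.current = None

    def add_transition(self, src, event, dst):
        self.transitions[(src, event)] = dst

    def start(self, situation):
        self.current = situation

    def observe(self, event):
        # Follow the transition if this event is valid in the current
        # situation; otherwise remain where we are.
        self.current = self.transitions.get((self.current, event), self.current)
        return self.current


# Example: a smart meeting room driven by tracker outputs (illustrative only).
graph = SituationGraph()
graph.add_transition("empty_room", "person_detected", "occupied")
graph.add_transition("occupied", "speech_detected", "meeting")
graph.add_transition("meeting", "room_cleared", "empty_room")

graph.start("empty_room")
graph.observe("person_detected")  # -> "occupied"
graph.observe("speech_detected")  # -> "meeting"
```

In a real deployment, events such as `person_detected` would be produced by the multimodal identification and tracking components, and each situation would enable a different set of context-aware services.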