In this paper we present a novel approach to building intelligent services in smart rooms, i.e. spaces equipped with diverse sets of sensors, including audio and visual perception. Designing such multi-modal perceptual systems is a non-trivial task: it requires an interdisciplinary effort to integrate voice and image recognition technologies, situation-modeling middleware, and context-aware multi-user interfaces into a robust, self-manageable software framework. The inherent complexity of systems that aim to perceive and understand the behavior of people in smart spaces makes building them a very difficult undertaking. As a result, this young research area currently suffers from immature architectural models, weak system testability, and poor component maintainability. Moreover, traditional design-lifecycle methodologies such as "bottom-up" or "top-down" fall short in the face of these challenges.