We are developing technology for an interaction corpus: a large collection of human interaction data captured by various sensors and annotated with machine-readable indices, intended to record episodes from nearly every part of daily life. To build such a corpus, we have prototyped ubiquitous and wearable sensor systems that collaboratively capture human interactions from multiple points of view. The goal of this study is a systematic framework in which diverse applications can handle human contexts, represented as machine-readable indices, in a uniform manner. By explicitly separating the raw data acquired from sensors from application semantics, the framework bridges the gaps between the context levels required by different applications and supports capturing human interactions in varied situations. This paper proposes a layered model of human interaction interpretation based on a bottom-up approach: interpretations of human interactions are hierarchically abstracted so that each layer carries distinct semantic and syntactic information, represented by machine-readable indices. We illustrate the use of our architecture through three sample applications, each of which gives people rich opportunities to share their experiences with others at a poster exhibition site. Moreover, we demonstrate the applicability and versatility of our approach by extending the system to another domain, a meeting situation.
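To make the layered, bottom-up model concrete, the following is a minimal sketch in Python of how raw sensor data might be separated from application semantics by passing machine-readable indices between abstraction layers. All layer names, event kinds, and labels here are hypothetical illustrations, not the authors' actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of a layered interpretation pipeline: each layer
# consumes indices from the layer below and emits more abstract,
# machine-readable indices. Field names and labels are illustrative.

@dataclass
class Index:
    layer: str    # abstraction layer that produced this index
    label: str    # machine-readable interpretation, e.g. "joint-attention/gaze"
    start: float  # start time in seconds
    end: float    # end time in seconds

def raw_to_events(samples):
    """Lowest layer: turn raw sensor samples into primitive event indices."""
    return [Index("event", s["kind"], s["t"], s["t"]) for s in samples]

def events_to_interactions(events):
    """Higher layer: abstract primitive events into interaction-level indices."""
    out = []
    for e in events:
        if e.label in ("gaze", "speech"):  # events treated as signs of joint attention
            out.append(Index("interaction", f"joint-attention/{e.label}", e.start, e.end))
    return out

# Applications query only the layer whose semantics they need,
# independent of which sensors produced the underlying raw data.
samples = [{"kind": "gaze", "t": 0.0}, {"kind": "speech", "t": 1.5}, {"kind": "step", "t": 2.0}]
interactions = events_to_interactions(raw_to_events(samples))
print([i.label for i in interactions])
```

Because each layer exchanges only uniform `Index` records, an application written against the interaction layer need not change when sensors at the lowest layer are swapped, which is the gap-bridging property the abstract describes.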