Interactive multimedia content controlled by a user's motion is attracting a great deal of attention, especially in entertainment applications such as gesture-based games. A system that provides such interactive content detects human motion using several body-worn sensors. To develop such a system, the content creator must have sufficient knowledge of the various sensors. Moreover, because sensors and content are tightly coupled, it is difficult to change or add sensors after the content has been created. In this paper, we propose a framework that helps content creators who lack detailed knowledge of sensors. In our framework, an interactive content is divided into two layers: a sensor management layer and a content layer. We confirmed that creators can build interactive content more easily with our framework.
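The two-layer separation described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual API: all class names, event names, and the threshold-based mapping are assumptions. The key idea it demonstrates is that device-specific interpretation lives entirely in the sensor management layer, while the content layer subscribes only to abstract, named events.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SensorEvent:
    """Abstract event passed from the sensor layer to the content layer."""
    name: str     # e.g. "arm_raised" (illustrative event name)
    value: float  # normalized intensity in [0.0, 1.0]

class SensorManagementLayer:
    """Hypothetical sensor management layer: translates raw readings
    from body-worn sensors into abstract events, so the content layer
    never touches device-specific data."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[SensorEvent], None]]] = {}

    def subscribe(self, event_name: str,
                  handler: Callable[[SensorEvent], None]) -> None:
        self._subscribers.setdefault(event_name, []).append(handler)

    def feed_raw(self, sensor_id: str, reading: float) -> None:
        # Device-specific interpretation stays here; swapping or adding
        # sensors changes only this mapping, not the content layer.
        if sensor_id == "accelerometer_arm" and reading > 0.8:
            self._dispatch(SensorEvent("arm_raised", min(reading, 1.0)))

    def _dispatch(self, event: SensorEvent) -> None:
        for handler in self._subscribers.get(event.name, []):
            handler(event)

# Content layer: reacts to abstract events without sensor knowledge.
log: List[str] = []
layer = SensorManagementLayer()
layer.subscribe("arm_raised", lambda e: log.append("play_jump_animation"))

layer.feed_raw("accelerometer_arm", 0.95)  # above threshold: event fires
layer.feed_raw("accelerometer_arm", 0.30)  # below threshold: ignored
```

Because the content layer depends only on event names such as `"arm_raised"`, replacing the accelerometer with a different sensor would require changes only inside `feed_raw`, which matches the framework's motivation of letting creators work without detailed sensor knowledge.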