The mixed-reality environment, or hybrid physical-digital space, is an emerging human-computer interaction paradigm with great potential to support constructive learning in everyday settings as a complement to traditional classroom methods. Key advantages over screen-based media and immersive (virtual-reality) environments include a) dynamic, multimodal feedback, which engages diverse learning styles through multiple modes of representation; b) affordance of unencumbered, full-body movement, which enables interactions to be physically embodied; and c) physical continuity with the classroom, which fosters informal collaborative and social interactions. For this potential to be realized, however, we must address significant challenges in interaction design. We must develop modes of interaction that are implicitly learnable, that afford full-body movement, and that are cognitively well adapted to large physical spaces. Furthermore, we must create a mechanism by which students can reflect on what they have "constructed" through interacting with the space, so that implicit learning can be leveraged in the interest of explicit understanding. To these ends, we have developed a novel interaction paradigm based on 3D path shape qualities: straight, curved, random, and stop, which describe the motion of an illuminated object (glowball) guided by the participant through the space. We infer these qualities both in real time (online) and offline using a robust Bayesian framework operating on a low-cost, non-intrusive video sensing apparatus. Online inference drives the interaction; offline segmentation drives the post-interaction display, our mechanism for reflection, in which segmentation results are mapped onto a physical trace of the participant's motion.
An informal study a) validates the implicit learnability of the straight, curved, and stop mappings based on shape-quality controls, and b) highlights the comparative advantage of the post-interaction display across all mappings when subjects are asked to identify the actions responsible for specific target outcomes.