Laban Movement Analysis (LMA) is a systematic framework for describing all forms of human movement and has been widely applied in animation, biomedicine, dance, and kinesiology. LMA (especially its Effort and Shape components) emphasizes how internal feelings and intentions govern the patterning of movement throughout the whole body. We argue that a nuanced understanding of intention via LMA is necessary for human-computer interaction to become embodied in ways that resemble interaction in the physical world. We therefore introduce a novel, flexible Bayesian fusion approach for identifying LMA Shape qualities from raw motion-capture data in real time. The method uses a dynamic Bayesian network (DBN) to fuse movement features across the body and across time and, as we discuss, can readily be adapted to low-cost video. It has delivered excellent performance in preliminary studies involving improvisatory movements. Our approach has been incorporated into Response, a mixed-reality environment in which users interact through natural, full-body movement and enhance their bodily-kinesthetic awareness via immersive sound and light feedback, with applications to kinesiology training, rehabilitation of Parkinson's patients, interactive dance, and many other areas.
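To make the temporal-fusion idea concrete, the following is a minimal sketch of per-frame Bayesian filtering over a small hidden-state model, standing in for one slice of the DBN described above. The Shape qualities, the single velocity feature, and every parameter value here are illustrative assumptions for exposition, not the authors' actual model or feature set.

```python
import math

STATES = ["rising", "sinking", "neutral"]  # hypothetical Shape qualities

# Transition model: qualities tend to persist across frames (sticky dynamics).
TRANS = {s: {t: (0.9 if s == t else 0.05) for t in STATES} for s in STATES}

# Gaussian emission over one assumed feature: vertical torso velocity (m/s).
EMIT_MEAN = {"rising": 0.4, "sinking": -0.4, "neutral": 0.0}
EMIT_STD = 0.25

def gauss(x, mu, sigma):
    """Gaussian likelihood of observation x under mean mu, std sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def filter_step(belief, obs):
    """One DBN time slice: predict forward with TRANS, weight by likelihood, renormalize."""
    predicted = {t: sum(belief[s] * TRANS[s][t] for s in STATES) for t in STATES}
    weighted = {t: predicted[t] * gauss(obs, EMIT_MEAN[t], EMIT_STD) for t in STATES}
    z = sum(weighted.values())
    return {t: w / z for t, w in weighted.items()}

def classify(frames):
    """Run the filter over a feature sequence and return the most probable quality."""
    belief = {s: 1.0 / len(STATES) for s in STATES}  # uniform prior
    for obs in frames:
        belief = filter_step(belief, obs)
    return max(belief, key=belief.get)

# Sustained upward torso velocity should be classified as "rising".
print(classify([0.3, 0.5, 0.4, 0.45]))
```

The real system fuses many such features across body parts within each time slice; this sketch keeps a single feature so the predict/update cycle, which is what makes the recognition temporal rather than frame-by-frame, stays visible.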