Many event sequences in everyday human movement exhibit temporal structure: for instance, footsteps in walking, the striking of balls in a tennis match, the movements of a dancer set to rhythmic music, and the gestures of an orchestra conductor. These events generate prior expectancies regarding the occurrence of future events. Moreover, these expectancies play a critical role in conveying expressive qualities and communicative intent through the movement; thus they are of considerable interest in musical control contexts. To this end, we introduce a novel Bayesian framework, which we call the temporal expectancy model, and use it to develop an analysis tool for capturing expressive and indicative qualities of the conducting gesture based on temporal expectancies. The temporal expectancy model is a general dynamic Bayesian network (DBN) that can be used to encode prior knowledge regarding temporal structure to improve event segmentation. The conducting analysis tool infers beat and tempo, which are indicative, and articulation, which is expressive, as well as temporal expectancies regarding beat (ictus and preparation instances) from the conducting gesture. Experimental results using our analysis framework reveal a very strong correlation between articulation (staccato vs. legato) and how sharply the preparation expectancy builds up, which bolsters the case for temporal expectancy as a cognitive model for event anticipation and as a key factor in the communication of expressive qualities of conducting gesture. Our system operates on data obtained from a marker-based motion capture system, but can be easily adapted to more affordable technologies such as video camera arrays.
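To make the idea of Bayesian inference of beat and tempo from timed gesture events concrete, the sketch below shows a toy particle filter that tracks the beat period from a sequence of ictus times. This is purely illustrative and not the paper's DBN: the function name, parameters, and noise model are assumptions chosen for the example, and a real system would jointly infer tempo, phase, and articulation from continuous motion data.

```python
import math
import random

def track_beat_period(event_times, n_particles=500, period_range=(0.4, 1.2),
                      obs_sigma=0.03, drift_sigma=0.01, seed=0):
    """Toy particle filter over beat period (hypothetical helper, not the
    paper's model). Each particle hypothesizes a beat period in seconds.
    After each observed event (e.g. an ictus), particles are weighted by
    how well the inter-event interval matches their period, then resampled.
    Returns the posterior-mean period estimate after each interval."""
    rng = random.Random(seed)
    particles = [rng.uniform(*period_range) for _ in range(n_particles)]
    estimates = []
    for prev, cur in zip(event_times, event_times[1:]):
        interval = cur - prev
        # Prior step: let each particle's period drift slightly (tempo change).
        particles = [max(1e-3, p + rng.gauss(0.0, drift_sigma)) for p in particles]
        # Likelihood step: Gaussian fit of the observed interval to each period.
        weights = [math.exp(-0.5 * ((interval - p) / obs_sigma) ** 2)
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample particles in proportion to their weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates
```

With ictus events at a steady 0.6 s period, the estimate converges toward 0.6 within a few updates; the same machinery extends naturally to predicting the *next* event time, which is the expectancy the abstract describes.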