Proceedings of the ACM Symposium on Applied Perception
Natural body movements arise as temporal sequences of individual actions. During visual action analysis, the human visual system must temporally segment the action stream into individual actions. Such temporal segmentation is also essential for building hierarchical models for action synthesis in computer animation. Ideally, these segmentations should be computed automatically in an unsupervised manner. We present an unsupervised segmentation algorithm based on Bayesian Binning (BB) and compare it to human segmentations derived from psychophysical data. BB has the advantage that the observation model can be easily exchanged. Moreover, being an exact Bayesian method, BB determines the number and positions of segmentation points automatically. We applied this method to motion capture sequences from martial arts and compared the results to segmentations provided by human observers, who watched movies of characters animated with the same motion capture data. Human segmentation was assessed with an interactive adjustment paradigm in which participants indicated segmentation points by selecting the relevant frames. The automatically generated segmentations agree well with human performance when the trajectory segments between transition points are modeled by polynomials of at least third order. This result is consistent with theories about differential invariants of human movements.
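The core idea behind exact changepoint segmentation with exchangeable per-segment observation models can be illustrated with a small sketch. This is not the authors' Bayesian Binning implementation: it replaces the exact Bayesian marginal likelihood with a penalized least-squares cost, fits a cubic polynomial (third order, as in the abstract) to each candidate segment, and finds the globally optimal segmentation by dynamic programming. The function names, the `penalty` parameter, and the `min_len` constraint are assumptions made for this toy example.

```python
import numpy as np

def segment_cost(y, t, i, j, degree=3):
    # Fit a polynomial of the given degree to y[i:j] and return the
    # sum of squared residuals -- a stand-in for the negative log
    # marginal likelihood of a per-segment observation model.
    coeffs = np.polyfit(t[i:j], y[i:j], degree)
    resid = y[i:j] - np.polyval(coeffs, t[i:j])
    return float(np.sum(resid ** 2))

def segment(y, t, penalty=1.0, degree=3, min_len=5):
    # Exact dynamic-programming search over all segmentations:
    # best[j] holds the minimal penalized cost of segmenting y[:j],
    # so the number and positions of changepoints fall out of the
    # optimization rather than being fixed in advance.
    n = len(y)
    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(min_len, n + 1):
        for i in range(0, j - min_len + 1):
            if not np.isfinite(best[i]):
                continue  # y[:i] has no valid segmentation
            c = best[i] + segment_cost(y, t, i, j, degree) + penalty
            if c < best[j]:
                best[j] = c
                back[j] = i
    # Recover the changepoint positions by backtracking.
    cps, j = [], n
    while j > 0:
        j = back[j]
        if j > 0:
            cps.append(j)
    return sorted(cps)
```

On a toy step signal, e.g. thirty samples at 0 followed by thirty at 10, the search places a single changepoint at the step, since any extra boundary adds a penalty without reducing the residual. The per-segment cost function is the only piece tied to the observation model, mirroring the exchangeability property mentioned in the abstract.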