Joint ACM Workshop on Human Gesture and Behavior Understanding (J-HGBU '11)
MM '11 Proceedings of the 19th ACM international conference on Multimedia
Human motion understanding based on motion capture (mocap) data is investigated. The recent rapid development and deployment of mocap systems have produced a large corpus of mocap sequences, creating a need for an automated annotation technique that can classify basic motion types into multiple categories. A novel technique for automated mocap data classification is developed in this work. Specifically, we adopt the tree-structured vector quantization (TSVQ) method to approximate human poses by codewords, so that the dynamics of a mocap sequence are approximated by a codeword sequence. To classify mocap data into different categories, we consider three approaches: 1) a spatial-domain approach based on the histogram of codewords, 2) a spatio-temporal approach via codeword sequence matching, and 3) a decision fusion approach. We test the proposed algorithm on the CMU mocap database using n-fold cross-validation and obtain a correct classification rate of 97%.
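The pipeline above (quantize poses to codewords with a TSVQ tree, then classify a sequence by its codeword histogram) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the binary 2-means tree splits, the L1 histogram distance, and the nearest-neighbor rule over training histograms are all assumptions, and the synthetic "pose" vectors stand in for real mocap joint data.

```python
# Illustrative sketch of TSVQ pose quantization + codeword-histogram
# classification (the spatial-domain approach). All names and parameters
# are assumptions for illustration, not taken from the paper.
import random

def dist2(p, q):
    # squared Euclidean distance between two pose vectors
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(points):
    n = len(points)
    return tuple(sum(xs) / n for xs in zip(*points))

def two_means(points, iters=10):
    # simple seeded 2-means split; returns the two centroids
    rng = random.Random(0)
    cents = rng.sample(points, 2)
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            groups[0 if dist2(p, cents[0]) <= dist2(p, cents[1]) else 1].append(p)
        cents = [mean(g) if g else c for g, c in zip(groups, cents)]
    return cents

def build_tsvq(points, depth):
    # recursively split the pose set; the leaves act as codewords
    if depth == 0 or len(set(points)) < 2:
        return {'leaf': mean(points)}
    c0, c1 = two_means(points)
    left = [p for p in points if dist2(p, c0) <= dist2(p, c1)]
    right = [p for p in points if dist2(p, c0) > dist2(p, c1)]
    if not left or not right:
        return {'leaf': mean(points)}
    return {'split': (c0, c1),
            'kids': (build_tsvq(left, depth - 1), build_tsvq(right, depth - 1))}

def encode(tree, p, path=''):
    # descend the tree; the leaf's bit-path serves as the codeword id
    if 'leaf' in tree:
        return path
    c0, c1 = tree['split']
    b = 0 if dist2(p, c0) <= dist2(p, c1) else 1
    return encode(tree['kids'][b], p, path + str(b))

def histogram(tree, seq):
    # normalized codeword histogram of one mocap sequence
    h = {}
    for p in seq:
        cw = encode(tree, p)
        h[cw] = h.get(cw, 0) + 1
    n = len(seq)
    return {k: v / n for k, v in h.items()}

def hist_dist(h1, h2):
    # L1 distance between two histograms (an assumed choice of metric)
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0) - h2.get(k, 0)) for k in keys)

def classify(tree, train, seq):
    # nearest-neighbor rule over training histograms; `train` maps
    # a label to a list of training sequences
    h = histogram(tree, seq)
    return min(train, key=lambda lab: min(hist_dist(h, histogram(tree, s))
                                          for s in train[lab]))
```

The spatio-temporal approach mentioned in the abstract would instead compare the encoded codeword *sequences* directly (e.g., with an alignment-based string distance) rather than their order-free histograms.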