Tele-immersive systems are growing in popularity and sophistication. They generate 3D video content at large scale, which poses challenges for executing data-mining tasks such as classifying actions and recognizing and learning actor movements. Fundamentally, these tasks require tagging and identifying the features present in tele-immersive 3D videos. We target the problem of 3D feature extraction, a relatively unexplored direction. In this paper we propose Samera, a scalable and memory-efficient feature extraction algorithm that operates on short 3D video segments. Samera first focuses on the relevant portions of each frame, then applies a flow-based technique across the frames of a short video segment to extract features. Finally, it achieves scalability by representing the constructed feature vector as a binary vector using Bloom filters. Experiments on 3D video segments derived from Laban Movement Analysis (LMA) show that Samera achieves a compression ratio of 147.5 relative to the original 3D videos.
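The abstract does not detail how the feature vector is binarized; the sketch below illustrates the general Bloom-filter idea it relies on — hashing each extracted feature into a fixed-size bit array, trading a small false-positive rate for a compact, fixed-length binary representation. All class names, parameters, and feature strings here are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: set membership in a fixed-size bit array.

    Illustrative sketch only -- sizes and hash scheme are assumptions,
    not taken from the Samera paper.
    """

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [0] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # No false negatives; false positives are possible.
        return all(self.bits[pos] for pos in self._positions(item))

# Insert quantized feature descriptors (hypothetical encoding) and query.
bf = BloomFilter()
for feature in ["flow:12:3", "flow:7:9", "flow:0:2"]:
    bf.add(feature)

assert "flow:12:3" in bf  # inserted features are always found
```

The compression comes from the fixed size of the bit array: regardless of how many features a segment yields, the representation occupies `num_bits` bits, at the cost of occasional false-positive matches during retrieval.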