Journal of Intelligent Information Systems
Rotation invariant spherical harmonic representation of 3D shape descriptors. Proceedings of the 2003 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing.
Pattern Classification (2nd Edition).
Deformation transfer for triangle meshes. ACM SIGGRAPH 2004 Papers.
Shape Topics: A Compact Representation and New Algorithms for 3D Partial Shape Retrieval. Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), Volume 2.
A survey of content based 3D shape retrieval methods. Multimedia Tools and Applications.
An Intrinsic Framework for Analysis of Facial Surfaces. International Journal of Computer Vision.
Proceedings of the 6th International Conference on Energy Minimization Methods in Computer Vision and Pattern Recognition (EMMCVPR'07).
Visual Similarity Based 3D Shape Retrieval Using Bag-of-Features. Proceedings of the 2010 Shape Modeling International Conference (SMI '10).
Geodesics between 3D closed curves using path-straightening. Proceedings of the 9th European Conference on Computer Vision (ECCV'06), Part I.
Compact vectors of locally aggregated tensors for 3D shape retrieval. Proceedings of the Sixth Eurographics Workshop on 3D Object Retrieval (3DOR '13).
We present a novel method for 3D-object retrieval based on Bag-of-Features (BoF) approaches [8]. The method starts by selecting and then describing a set of points on the 3D object. The proposed descriptor is an indexed collection of closed curves in R^3 lying on the 3D surface. Such a descriptor has the advantage of being invariant to the various transformations a shape can undergo. Using vector quantization, we cluster these descriptors to form a shape vocabulary; each selected point on the object is then associated with a cluster (word) in that vocabulary. Finally, a BoF histogram counting the occurrences of every word is computed. To assess our method, we used shapes from the TOSCA and Sumner datasets. The results clearly demonstrate that the method is robust to many kinds of transformations and achieves higher precision than some state-of-the-art methods.
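The BoF pipeline summarized above (cluster local descriptors into a vocabulary by vector quantization, assign each descriptor to its nearest word, then count word occurrences) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names and the toy k-means are assumptions, and the descriptors here are generic vectors rather than the paper's indexed collections of closed curves.

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    # Toy k-means vector quantization: cluster descriptor vectors
    # into k "words" (cluster centers) forming the shape vocabulary.
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Distance from every descriptor to every center, then nearest-center labels.
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bof_histogram(descriptors, centers):
    # Assign each descriptor of one 3D object to its nearest word,
    # then count occurrences of every word (normalized histogram).
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

Two objects are then compared by a distance between their BoF histograms (e.g. L1 or chi-squared), which is what makes the representation compact enough for retrieval.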