Wearable computer vision systems provide many opportunities to develop human assistive devices. This work contributes visual scene understanding techniques using a helmet-mounted omnidirectional vision system. The goal is to extract semantic information about the environment, such as the type of environment being traversed or the basic 3D layout of the place, in order to build assistive navigation systems. We propose a novel line-based global image descriptor that encodes the structure of the observed scene. This descriptor is designed with omnidirectional imagery in mind, where observed lines are longer than in conventional images. Our experiments show that the proposed descriptor can be used for indoor scene recognition, with results comparable to state-of-the-art global descriptors. In addition, we demonstrate advantages of particular interest for wearable vision systems: higher robustness to rotation, compactness, and easier integration with other scene understanding steps.
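To illustrate the general idea of a line-based global descriptor (this is a minimal generic sketch, not the paper's actual descriptor: the function name, binning scheme, and normalization are assumptions for illustration only), one can aggregate detected line segments into a length-weighted orientation histogram:

```python
import math
import numpy as np

def line_histogram_descriptor(segments, n_bins=8):
    """Illustrative global descriptor: a length-weighted histogram of
    line-segment orientations (NOT the descriptor proposed in the paper).

    segments: iterable of (x1, y1, x2, y2) line endpoints.
    Orientations are folded into [0, pi) since a line has no direction.
    The histogram is L1-normalized so images with different numbers of
    detected lines remain comparable.
    """
    hist = np.zeros(n_bins)
    for x1, y1, x2, y2 in segments:
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0:
            continue  # skip degenerate segments
        # Fold the segment angle into [0, pi): a line and its reverse
        # direction are the same line.
        theta = math.atan2(y2 - y1, x2 - x1) % math.pi
        bin_idx = min(int(theta / math.pi * n_bins), n_bins - 1)
        hist[bin_idx] += length  # longer lines contribute more
    total = hist.sum()
    return hist / total if total > 0 else hist

# Example: one horizontal segment (length 10) and one vertical (length 5)
d = line_histogram_descriptor([(0, 0, 10, 0), (0, 0, 0, 5)])
```

With 8 bins, the horizontal segment falls in bin 0 and the vertical one in bin 4, with weights proportional to their lengths (2/3 and 1/3 after normalization). A compact fixed-size vector of this kind can be compared across images with standard distances, which hints at why such descriptors are attractive for scene recognition.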