Abstract: This paper explores the combination of inertial sensor data with vision. Visual and inertial sensing are two sensory modalities that can be exploited to provide robust solutions for image segmentation and for the recovery of 3D structure from images, increasing the capabilities of autonomous robots and enlarging the application potential of vision systems. In biological systems, the information provided by the vestibular system is fused with vision at a very early processing stage, playing a key role in the execution of visual movements such as gaze holding and tracking, while visual cues in turn aid spatial orientation and body equilibrium. In this paper, we establish a framework for using inertial sensor data in vision systems and describe some of the results obtained. The unit sphere projection camera model is used, providing a simple model for inertial data integration. Using the vertical reference provided by the inertial sensors, the image horizon line can be determined. Using just one vanishing point and the vertical, we can recover the camera's focal distance and provide an external bearing for the system's navigation frame of reference. Knowing the geometry of a stereo rig and its pose from the inertial sensors, the collineation of level planes can be recovered, providing enough restrictions to segment and reconstruct vertical features and leveled planar patches.
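As an illustration of the horizon-line step above, here is a minimal sketch, assuming a standard pinhole model rather than the paper's unit sphere projection: for a calibration matrix K with focal distance f and principal point (cx, cy), the vanishing line of level planes is l = K^{-T} n, where n is the vertical direction expressed in the camera frame (as reported by the inertial sensors). The function name and parameters are illustrative, not from the paper.

```python
def horizon_line(f, cx, cy, n):
    """Vanishing line of level planes, l = K^{-T} n, for the pinhole
    calibration matrix K = [[f, 0, cx], [0, f, cy], [0, 0, 1]] and the
    camera-frame vertical direction n (3-vector from the inertial sensor).

    A pixel (u, v) lies on the horizon when l[0]*u + l[1]*v + l[2] == 0.
    The closed form below is K^{-T} applied to n, expanded by hand.
    """
    return (n[0] / f,
            n[1] / f,
            -cx * n[0] / f - cy * n[1] / f + n[2])

# Sanity check: a level camera with the y-axis pointing down has
# vertical n = (0, -1, 0); the horizon is then v = cy, i.e. a
# horizontal line through the principal point, as expected.
l = horizon_line(500.0, 320.0, 240.0, (0.0, -1.0, 0.0))
```

With a tilted camera, n picks up x and z components and the recovered line correspondingly shifts and rotates in the image, which is what makes the inertial vertical useful as a direct, per-frame horizon estimate.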