A prototype system has been built to navigate a walking robot into a ship structure. The robot is equipped with a stereo head providing both monocular and stereo vision. From the CAD model of the ship, viewpoints are selected so that the head can look at locations with sufficient features, and the edge features for these views are extracted automatically. The pose of the robot is estimated from features detected by two vision approaches: one searches the full image for junctions and uses stereo information to recover 3D structure, while the other is monocular and tracks 2D edge features. To achieve robust tracking, a model-based tracking approach is enhanced with Edge Projected Integration of Cues (EPIC), which uses object knowledge to select the correct features in real time. The two vision systems are synchronised by sending the images over a fibre-channel network. The pose estimation combines the 2D and 3D features and locates the robot to within a few centimetres inside ship cells several metres across. Gyros stabilise the head while the robot moves. The system was developed within the RobVision project, and the results of the final demonstration are given.
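The idea behind EPIC — integrating several image cues, weighted by knowledge of the object, to pick the correct edge among distractors — can be illustrated with a minimal sketch. This is not the authors' implementation; the cue names, the [0, 1] cue responses, and the linear weighting scheme are all illustrative assumptions.

```python
# Hypothetical sketch of cue-weighted edge selection along a tracker's
# search normal, in the spirit of EPIC (Edge Projected Integration of Cues).
# Cue names, responses, and weights below are illustrative assumptions,
# not values from the RobVision system.

def select_edge(candidates, weights):
    """Return the candidate edgel with the highest integrated cue score.

    candidates: list of dicts mapping cue name -> response in [0, 1]
    weights:    dict mapping cue name -> weight (e.g. adapted from the
                model side of the edge in the previous frame)
    """
    def score(c):
        # Linear integration of cues: sum of weighted responses.
        return sum(weights[cue] * c.get(cue, 0.0) for cue in weights)
    return max(candidates, key=score)

# Example: three candidate edgels found along one search line.
candidates = [
    {"gradient": 0.9, "intensity_match": 0.2, "proximity": 0.8},  # strong gradient, wrong surface
    {"gradient": 0.6, "intensity_match": 0.9, "proximity": 0.9},  # consistent with the model edge
    {"gradient": 0.4, "intensity_match": 0.5, "proximity": 0.3},  # weak all round
]
weights = {"gradient": 0.3, "intensity_match": 0.5, "proximity": 0.2}

best = select_edge(candidates, weights)
```

Here the object knowledge enters through the weights: a cue that matched the model side of the edge reliably in previous frames is trusted more, so a distractor edge with a strong gradient but the wrong surface appearance is rejected.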