This paper presents two related methods for autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance; thus they need neither carry special markings nor be placed at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (within a 20th of a second) and thus supporting dynamic models of the robot's motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new noniterative optic flow method has been developed for this purpose. Adding ω to all observed (robot-relative) bearings θ gives true bearings towards objects (relative to a fixed coordinate frame). The rotation ω and the r, θ coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, that reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from a Scitos G5 autonomously moving robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
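The trilateration step described above amounts to intersecting three range circles around landmarks at known positions. A minimal sketch of that geometry follows; the function name and the linearization approach (subtracting the first circle equation from the other two) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def trilaterate(landmarks, ranges):
    """Estimate 2-D position from ranges to three landmarks at known
    positions.  Subtracting the first circle equation
    (x - x1)^2 + (y - y1)^2 = r1^2 from the other two cancels the
    quadratic terms, leaving a 2x2 linear system in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = ranges
    A = np.array([[2.0 * (x2 - x1), 2.0 * (y2 - y1)],
                  [2.0 * (x3 - x1), 2.0 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)
```

With noisy omnistereo ranges, more than three landmarks would give an overdetermined system, solved by least squares instead of an exact solve.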
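The abstract states that ω together with the (r, θ) coordinates of one fixed point at two frames suffices to estimate the robot's translation. A minimal sketch of that observation (variable names are illustrative; the paper combines many such per-point solutions with its Clumat clustering, which is not reproduced here):

```python
import math

def translation_from_point(r1, th1, r2, th2, omega):
    """Recover the robot's translation between two frames from
    range/bearing observations of one fixed world point.

    th1, th2 are robot-relative bearings; adding the rotation omega
    (e.g. from the visual compass) to the second bearing expresses
    both observations in the first frame's fixed axes.  The fixed
    point's position seen from each frame then differs exactly by
    the robot's translation."""
    p1 = (r1 * math.cos(th1), r1 * math.sin(th1))
    p2 = (r2 * math.cos(th2 + omega), r2 * math.sin(th2 + omega))
    return (p1[0] - p2[0], p1[1] - p2[1])
```

Every matched guiding point yields one such translation estimate; robustly clustering those estimates is what suppresses outliers from mismatched features.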