Active Tracking Strategy for Monocular Depth Inference over Multiple Frames
IEEE Transactions on Pattern Analysis and Machine Intelligence
Under natural viewing conditions, humans are unaware of the small head and eye movements they continually perform in the periods between voluntary relocations of gaze. It has recently been shown that these fixational head movements provide useful depth information in the form of motion parallax. Here, we replicate these coordinated head and eye movements in a humanoid robot and describe a method for extracting the resulting depth information. Proprioceptive signals are interpreted by means of a kinematic model of the robot to compute the velocity of the camera. The resulting signal is then optimally integrated with the optic flow to estimate depth in the scene. We present the results of simulations that validate the proposed approach.
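The core computation described in the abstract (combining a camera velocity known from proprioception with measured optic flow to recover depth) can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes a pinhole camera and the standard Longuet-Higgins instantaneous flow equations, subtracts the depth-independent rotational flow, and solves a per-pixel least-squares problem in place of the paper's optimal integration. All function names are illustrative.

```python
import numpy as np

def rotational_flow(x, y, omega, f=1.0):
    # Rotational component of optic flow; depends only on the camera's
    # angular velocity omega = (wx, wy, wz), not on scene depth.
    wx, wy, wz = omega
    u = wx * x * y / f - wy * (f + x**2 / f) + wz * y
    v = wx * (f + y**2 / f) - wy * x * y / f - wz * x
    return np.stack([u, v], axis=-1)

def translational_flow_unit_depth(x, y, T, f=1.0):
    # Translational flow field for unit depth (Z = 1); the true
    # translational flow at a pixel is this vector divided by Z.
    Tx, Ty, Tz = T
    u = x * Tz - f * Tx
    v = y * Tz - f * Ty
    return np.stack([u, v], axis=-1)

def estimate_depth(flow, x, y, T, omega, f=1.0):
    # Subtract the rotational flow predicted from proprioception, then
    # solve flow_t = p / Z per pixel in the least-squares sense:
    # Z = (p . p) / (p . flow_t).
    flow_t = flow - rotational_flow(x, y, omega, f)
    p = translational_flow_unit_depth(x, y, T, f)
    return np.sum(p * p, axis=-1) / np.sum(p * flow_t, axis=-1)
```

In practice the flow measurements are noisy, so a single-pixel estimate like this would be fused over frames and pixels (e.g. with a recursive filter), which is where the optimal integration described above comes in.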