Vision-based control of motion is feasible only if vision provides reliable control signals and full system integration is achieved. This paper addresses both issues. A modular system architecture is built around the basic primitives of object tracking: the features of the object. Initialisation is partly automated using search functions that describe the task. The features found and tracked in the image are linked to a wire-frame model of the object as seen in the image; this model is used both for feature tracking and for continuous pose determination. Robust feature tracking is of particular importance, and it is achieved with EPIC, a method of Edge-Projected Integration of Cues. A demonstration shows the robot following the pose of an object moved by hand, under common room lighting, at frame rate on a PC.
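The cue-integration idea behind EPIC can be pictured as follows: along a 1-D search line normal to a predicted model edge, an intensity-gradient cue and a colour-consistency cue are combined to select the most likely edge position. The sketch below is only an illustration of this principle under stated assumptions; the function name, the Gaussian weighting of the colour cue, and the multiplicative combination are all assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def epic_edge_position(profile, fg_color, bg_color, sigma=2.0):
    """Illustrative sketch of edge-projected cue integration.

    profile  : (n, 3) array of RGB samples along a search line
               normal to the predicted edge.
    fg_color : expected object (foreground) colour before the edge.
    bg_color : expected background colour after the edge.
    Returns the sample index with the highest combined cue score.
    (Name and weighting are hypothetical, not the paper's API.)
    """
    profile = np.asarray(profile, dtype=float)
    n = len(profile)

    # Cue 1: gradient magnitude of the intensity (grey-level) profile.
    gray = profile.mean(axis=1)
    grad = np.abs(np.gradient(gray))

    # Cue 2: colour consistency -- samples before a candidate edge
    # should match the foreground colour, samples after it the
    # background colour; smaller mismatch gives a higher score.
    color_cue = np.zeros(n)
    for i in range(1, n - 1):
        fg_err = np.linalg.norm(profile[:i] - fg_color, axis=1).mean()
        bg_err = np.linalg.norm(profile[i:] - bg_color, axis=1).mean()
        color_cue[i] = np.exp(-(fg_err + bg_err) / (2.0 * sigma**2))

    # Integrate the cues multiplicatively: a candidate must be
    # supported by both the gradient and the colour evidence.
    score = grad * color_cue
    return int(np.argmax(score))
```

Because the cues are combined rather than used alone, a strong gradient caused by a shadow or highlight is suppressed when the colours on either side do not match the expected object/background pair, which is the kind of robustness under common room lighting that the abstract claims.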