Deformable models have been studied in image analysis over the last decade and used to recognize flexible or rigid templates under diverse viewing conditions. This article addresses how to define a deformable model for a real-time color vision system for mobile robot navigation. Rather than requiring a detailed model definition from the user, the algorithm extracts and learns the relevant information from each object automatically. How well a model represents the template present in the image is measured by an energy function: its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that drives the model deformation. At a later stage, any symbolic information inside the object is extracted and interpreted by a neural network. The resulting perception module has been integrated successfully into a complete navigation system. Experimental results in real environments are presented, demonstrating the effectiveness and capabilities of the system.
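The core fitting step described above — minimizing an energy function over deformation parameters with a genetic algorithm — can be sketched as follows. This is a minimal illustration, not the article's actual system: the template pose is reduced to three hypothetical parameters (x-translation, y-translation, scale), and the image-agreement energy is replaced by a synthetic quadratic whose minimum sits at an assumed ground-truth pose. All names (`TARGET`, `energy`, `genetic_search`) are illustrative.

```python
import random

# Assumed ground-truth pose (tx, ty, scale) for this demo only; the real
# energy would measure how well the deformed template matches the image.
TARGET = (40.0, 25.0, 1.5)

def energy(params):
    """Synthetic stand-in energy: squared distance to the target pose."""
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

def random_individual():
    """Random initial deformation parameters within plausible bounds."""
    return (random.uniform(0.0, 100.0),
            random.uniform(0.0, 100.0),
            random.uniform(0.5, 3.0))

def mutate(ind, sigma=1.0):
    """Perturb each parameter with Gaussian noise."""
    return tuple(g + random.gauss(0.0, sigma) for g in ind)

def crossover(a, b):
    """Uniform crossover: each gene comes from either parent."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def genetic_search(pop_size=60, generations=120, elite=10, seed=0):
    """Evolve a population of poses toward the energy minimum."""
    random.seed(seed)
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)            # lower energy = better fit
        parents = pop[:elite]           # elitism keeps the best poses
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = parents + children
    return min(pop, key=energy)

best = genetic_search()
```

In the article's setting the chromosome would encode the full template deformation and the energy would be evaluated against the color image; the elitist selection loop, however, has the same shape.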