The manual design of vision-based robotic behaviors remains a substantial challenge due to the complexity of visual information. Considerable effort is required to establish meaningful relationships and constraints between the acquired image perception and the geometry of the environment, both empirically and programmatically. This contribution proposes an alternative framework for learning autonomous visual navigation behavior from demonstration examples by integrating a 3D range sensor and an omnidirectional camera. A programming-by-demonstration approach is used to learn the demonstrated trajectories as a mapping from visual features computed on the omnidirectional image to the corresponding robot motion. Exhaustive tests are performed to identify the discriminant features needed to mimic the teacher's demonstrations. The relationship between perception and action is learned from the demonstrations by means of locally weighted regression and artificial neural networks. Experimental results on the mobile robot indicate that the acquired visual behavior is robust and is able to generalize, and to optimize its performance, in environments not presented during training.
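The abstract does not give implementation details for the learned perception-to-action mapping. As a rough illustration only, the following is a minimal sketch of locally weighted regression predicting a motion command from an image feature vector; the function name, bandwidth, and data layout are all hypothetical, not taken from the paper.

```python
import numpy as np

def lwr_predict(X, Y, x_query, bandwidth=0.5):
    """Locally weighted regression (hypothetical sketch).

    X: (n, d) array of demonstration feature vectors.
    Y: (n, m) array of recorded motion commands.
    x_query: (d,) feature vector of the current image.
    Returns the (m,) predicted motion command.
    """
    # Gaussian kernel weights: nearby demonstrations count more.
    d2 = np.sum((X - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))

    # Augment features with a bias term.
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)

    # Weighted least squares: solve (Xa^T W Xa) beta = Xa^T W Y.
    beta = np.linalg.lstsq(Xa.T @ W @ Xa, Xa.T @ W @ Y, rcond=None)[0]
    return np.append(x_query, 1.0) @ beta
```

In such a scheme, each query re-fits a local linear model around the current feature vector, so the learned behavior can follow nonlinear perception-action mappings while remaining a lazy, non-parametric learner over the stored demonstrations.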