Robotic Target Tracking with Approximation Space-Based Feedback During Reinforcement Learning

  • Authors:
  • Daniel Lockery; James F. Peters

  • Affiliations:
  • Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba R3T 5V6, Canada (both authors)

  • Venue:
  • RSFDGrC '07 Proceedings of the 11th International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing
  • Year:
  • 2009

Abstract

This paper presents a method of target tracking for a robotic vision system that employs reinforcement learning with feedback based on average rough coverage performance values. The application is a line-crawling inspection robot (ALiCE II, the second revision of the Automated Line Crawling Equipment) designed to automate the inspection of hydroelectric transmission lines and related equipment. The problem considered in this paper is how to train the vision system to track targets of interest and acquire useful images for further analysis. To train the system, two versions of Watkins' Q-learning were implemented: the classical single-step version and a modified variant using an approximation space-based form of what we term rough feedback. The robot is briefly described along with experimental results for the two forms of the Q-learning control algorithm. The contribution of this article is the introduction of a modified version of Q-learning control with rough feedback to monitor and adjust the learning rate during target tracking.
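To make the two algorithms in the abstract concrete, the sketch below shows the classical single-step Watkins' Q-learning update alongside a hypothetical rough-feedback step-size adjustment. The paper's actual rough-coverage computation is not given here, so `rough_feedback_alpha` and its scaling rule are assumptions for illustration only, not the authors' formula.

```python
def q_update(Q, s, a, r, s_next, alpha, gamma):
    """Classical single-step Watkins' Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

def rough_feedback_alpha(base_alpha, avg_rough_coverage):
    """Hypothetical rough-feedback rule (an assumption, not the paper's):
    shrink the learning rate as the average rough coverage of observed
    behaviour (a value in [0, 1]) increases, i.e. learn more cautiously
    once behaviour already matches the approximation-space standard."""
    return base_alpha * (1.0 - avg_rough_coverage)

# Usage: one tracking step with the learning rate modulated by coverage.
Q = {"s0": {"track": 0.0}, "s1": {"track": 1.0}}
alpha = rough_feedback_alpha(0.5, avg_rough_coverage=0.0)  # no coverage yet
q_update(Q, "s0", "track", r=1.0, s_next="s1", alpha=alpha, gamma=0.9)
```

In this sketch the classical algorithm corresponds to holding `alpha` fixed, while the rough-feedback variant recomputes it from the running average coverage between episodes.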