A practical approach for position control of a robotic manipulator using a radial basis function network and a simple vision system

  • Authors:
  • Bach H. Dinh; Matthew W. Dunnigan; Donald S. Reay

  • Affiliations:
  • Electrical, Electronic & Computer Engineering, School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, UK (all authors)

  • Venue:
  • WSEAS Transactions on Systems and Control
  • Year:
  • 2008

Abstract

This paper proposes a new practical approach using an RBFN (Radial Basis Function Network) to approximate the inverse kinematics function of a robot manipulator. It can be effectively applied to position control of a real robot-vision system in which robot movement in the workspace is observed by a camera. Several traditional methods exist that use the known geometry of the manipulator to determine the relationship between the joint variable space and the world coordinate space. However, these methods are impractical when the manipulator geometry cannot be determined easily, as in a robot-vision system. A neural network, with its inherent learning ability, is therefore an effective alternative solution to the inverse kinematics problem. In this paper, an approach using an RBFN with predefined centres in the hidden layer (distributed regularly over the workspace) and a combination of the strict interpolation method and the LMS (Least Mean Square) algorithm is presented for effective learning of the inverse kinematics function. Using strict interpolation with constrained training data, an appropriate approximation of the inverse kinematics function can be produced. However, this solution raises the practical difficulty of collecting constrained training patterns whose inputs lie at predefined positions in the workspace. The LMS algorithm, in turn, can incrementally update the linear output-layer weights through an on-line training process. Combining these techniques therefore yields the advantages of both methods and addresses the difficulties of practical applications, such as the sensitive structure of a real robot-vision system, or the realistic situation in which the initial setup and application environments differ. To verify the performance of the proposed approach, practical experiments have been performed using a Mitsubishi PA10-6CE manipulator observed by a webcam.
All application programmes, such as the robot servo control, the neural network, and the image processing, were written in C/C++ and run on a real-time robotic system. The experimental results prove that the proposed approach is effective.
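The two-stage training scheme the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the grid size, Gaussian width, learning rate, and the use of an analytic two-link inverse kinematics as a stand-in "teacher" are all assumptions (the paper instead collects training patterns from the real robot-vision setup).

```python
import numpy as np

rng = np.random.default_rng(0)

# RBF centres placed on a regular grid over an assumed reachable 2-D workspace
xs = np.linspace(0.1, 0.6, 5)
centres = np.array([(x, y) for x in xs for y in xs])   # 25 hidden units
width = 0.15                                           # Gaussian width (assumed)

def phi(p):
    """Hidden-layer activations for a 2-D target position p."""
    d2 = np.sum((centres - np.asarray(p)) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Stand-in teacher: analytic inverse kinematics of a 2-link planar arm,
# used here only to generate example joint-angle targets.
L1 = L2 = 0.5
def inverse_kin(p):
    x, y = p
    c2 = np.clip((x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    t2 = np.arccos(c2)
    t1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(t2), L1 + L2 * np.cos(t2))
    return np.array([t1, t2])

# 1) Strict interpolation: solve for output weights so the network
#    reproduces the joint angles exactly at the predefined centres.
Phi = np.array([phi(c) for c in centres])              # (25, 25) interpolation matrix
T = np.array([inverse_kin(c) for c in centres])        # (25, 2) joint-angle targets
W = np.linalg.solve(Phi, T)

# 2) LMS: incremental on-line refinement of the linear output-layer weights
eta = 0.05
for _ in range(2000):
    p = rng.uniform(0.1, 0.6, size=2)                  # observed workspace point
    h = phi(p)
    err = inverse_kin(p) - h @ W                       # joint-angle error
    W += eta * np.outer(h, err)                        # LMS weight update

theta = phi([0.3, 0.45]) @ W                           # joint angles for a new target
```

The split mirrors the abstract's argument: strict interpolation gives a good initial approximation from a small constrained data set, while the LMS updates let the same linear weights adapt on-line when the deployed environment drifts from the initial setup.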