Evolving efficient sensor arrangement and obstacle avoidance control logic for a miniature robot

  • Authors:
  • Muthukumaran Chandrasekaran, Karthik Nadig, Khaled Rasheed

  • Affiliations:
  • Institute for Artificial Intelligence, University of Georgia, Athens, GA (Chandrasekaran, Nadig); Computer Science Department, University of Georgia, Athens, GA (Rasheed)

  • Venue:
  • IEA/AIE'11: Proceedings of the 24th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems (Modern Approaches in Applied Intelligence), Volume Part II
  • Year:
  • 2011

Abstract

Evolutionary computation techniques are frequently used in robotics to develop controllers for autonomous robots. In this paper, we evaluate the use of Genetic Programming (GP) to evolve a controller that implements an Obstacle Avoidance (OA) behavior in a miniature robot. The GP system generates the OA logic equation offline in a simulated dynamic 2-D environment, transforming the sensory inputs of a simulated robot into a controller decision. The goodness of a generated logic equation is computed with a fitness function that maximizes the exploration of the environment and minimizes the number of collisions over a fixed number of decisions allowed before the simulation is stopped. The set of motor control decisions for all possible sensor trigger sequences is then applied to a real robot, which is tested in a real environment. The efficiency of this OA robot depends on the information it can receive from its surroundings, which in turn depends on the design of the sensor module. Thus, we also present a Genetic Algorithm (GA) that evolves a sensor arrangement, taking into consideration economic constraints as well as the usefulness of the information that can be retrieved. The evolved algorithm shows robust performance even when the robot is placed in completely different, dynamically changing environments. The performance of our algorithm is compared with that of a hybrid neural network and with an online (real-time) evolution method.
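The fitness evaluation described in the abstract — rewarding exploration and penalizing collisions over a fixed decision budget — can be sketched as follows. This is an illustrative reconstruction, not the authors' simulator: the grid size, the three-way sensor model, the action set, and the collision penalty weight are all assumptions made for the sketch.

```python
GRID = 20                 # simulated arena is a GRID x GRID cell grid (assumption)
MAX_DECISIONS = 200       # fixed decision budget before the simulation stops
COLLISION_PENALTY = 5.0   # penalty weight per collision (assumption)

# Headings 0..3 map to unit moves: E, S, W, N
MOVES = [(1, 0), (0, 1), (-1, 0), (0, -1)]

def read_sensors(arena, x, y, heading):
    """Binary obstacle flags for the cells front/left/right of the robot
    (a simplified stand-in for the paper's sensor trigger sequences)."""
    def blocked(h):
        dx, dy = MOVES[h % 4]
        nx, ny = x + dx, y + dy
        return (nx, ny) in arena or not (0 <= nx < GRID and 0 <= ny < GRID)
    return {'front': blocked(heading),
            'left': blocked(heading - 1),
            'right': blocked(heading + 1)}

def evaluate(controller, arena):
    """Score a candidate OA controller: count distinct cells explored,
    subtract a penalty for each collision, over MAX_DECISIONS decisions."""
    x, y, heading = GRID // 2, GRID // 2, 0   # start in the centre, facing east
    visited = {(x, y)}
    collisions = 0

    for _ in range(MAX_DECISIONS):
        sensors = read_sensors(arena, x, y, heading)
        action = controller(sensors)          # 'left' | 'right' | 'forward'
        if action == 'left':
            heading = (heading - 1) % 4
        elif action == 'right':
            heading = (heading + 1) % 4
        dx, dy = MOVES[heading]
        nx, ny = x + dx, y + dy
        if (nx, ny) in arena or not (0 <= nx < GRID and 0 <= ny < GRID):
            collisions += 1                   # blocked: robot stays put
        else:
            x, y = nx, ny
            visited.add((x, y))

    return len(visited) - COLLISION_PENALTY * collisions
```

A GP system would evolve the body of `controller` (the logic equation mapping sensor flags to an action) and use `evaluate` as its fitness function; a simple hand-written avoider such as `lambda s: 'right' if s['front'] else 'forward'` should score well above a controller that ignores its sensors.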