Single robot - Multiple human interaction via intelligent user interfaces

  • Authors:
  • Santosh Kumar; Ali Sekmen

  • Affiliations:
  • Department of Electrical and Computer Engineering, Tennessee State University, Nashville, TN 37209, USA; Department of Computer Science, Tennessee State University, 3500 John A. Merritt Boulevard, Nashville, TN 37209, USA

  • Venue:
  • Knowledge-Based Systems
  • Year:
  • 2008


Abstract

This paper addresses research issues in the design of intelligent user interfaces for improving human-robot interaction. In many critical applications, users interact with robots through Graphical User Interfaces (GUIs) that contain standard components intended to serve a large user population. Some of these components may be redundant, or even confusing, for particular users depending on their preferences, capabilities, and the context in which the robot is used. This paper describes an adaptive system that enables a mobile robot to learn its users' preferences and capabilities so that it can offer a dynamic, efficient GUI tailored to each user rather than a single standard GUI for all users. The system predicts users' future actions by building models from their previous interactions with the robot. The system was implemented and evaluated on a Pioneer 3-AT mobile robot. About 20 participants, assessed beforehand for spatial ability, directed the robot in simple spatial navigation tasks to evaluate the effectiveness of the adaptive interface. Time to complete the task, the number of steps, and the number of errors were recorded. The results showed that although spatial reasoning ability plays an important role in mobile robot navigation, it matters less when the robot is controlled through the adaptive interface than through the non-adaptive one.
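The abstract states that the system predicts users' future actions from models built over their previous interactions, without specifying the modeling technique. One minimal way such prediction could work is a first-order Markov (transition-frequency) model over a user's GUI action history; the sketch below is a hypothetical illustration under that assumption, and the class and method names are invented for this example, not taken from the paper.

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Hypothetical per-user model of GUI actions.

    Sketch only: assumes a first-order Markov model, i.e. the next
    action is predicted from frequencies of what followed the current
    action in this user's past interactions.
    """

    def __init__(self):
        # transitions[prev_action] counts which actions followed it
        self.transitions = defaultdict(Counter)
        self.last_action = None

    def observe(self, action):
        """Record one GUI action (e.g. a button press) in sequence."""
        if self.last_action is not None:
            self.transitions[self.last_action][action] += 1
        self.last_action = action

    def predict_next(self, k=3):
        """Return up to k most likely next actions, which an adaptive
        GUI could promote to more prominent positions."""
        counts = self.transitions.get(self.last_action)
        if not counts:
            return []
        return [action for action, _ in counts.most_common(k)]

# Example: after this history the user's last action is "left",
# and "forward" is the only action ever observed to follow "left".
predictor = ActionPredictor()
for a in ["forward", "forward", "left", "forward", "left"]:
    predictor.observe(a)
print(predictor.predict_next())  # ['forward']
```

A richer implementation might weight recent interactions more heavily or condition on task context, but even a frequency model of this kind would let the interface surface the few controls a given user is most likely to need next.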