Robots as interfaces to haptic and locomotor spaces

  • Authors:
  • Vladimir Kulyukin, Chaitanya Gharpure, Cassidy Pentico

  • Affiliations:
  • Utah State University (all authors)

  • Venue:
  • Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI)
  • Year:
  • 2007


Abstract

Research on spatial cognition and navigation of the visually impaired suggests that vision may be the primary sensory modality that enables humans to align the egocentric (self-to-object) and allocentric (object-to-object) frames of reference in space. In the absence of vision, the frames align best in the haptic space. In the locomotor space, as the haptic space translates with the body, the lack of vision causes the frames to misalign, which degrades the reliability of action. In this paper, we argue that robots can function as interfaces to the haptic and locomotor spaces in supermarkets. In the locomotor space, the robot eliminates the need for frame alignment; in or near the haptic space, it cues the shopper to the salient features of the environment that suffice for product retrieval. We present a trichotomous ontology of the spaces in a supermarket induced by the presence of a robotic shopping assistant, and we analyze the results of robot-assisted shopping experiments conducted in a real supermarket with ten visually impaired participants.