Multimodal reference resolution for mobile spatial interaction in urban environments

  • Authors:
  • Mohammad Mehdi Moniri; Christian Müller

  • Affiliations:
  • Automotive Group, German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany; Automotive Group, German Research Center for Artificial Intelligence (DFKI) & Action Line Intelligent Transportation Systems, EIT ICT lab, Germany

  • Venue:
  • Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
  • Year:
  • 2012

Abstract

We present results of a study on referring to the outside environment from within a moving vehicle. Reference resolution is the first necessary step in integrating the outside environment into the in-car interactive system: it is the problem of determining which of the objects outside the vehicle the user is interested in. In our study, we explored eye gaze, head pose, pointing gestures with a smartphone, and the user's field of view. We implemented and tested everything in a moving vehicle in real-life traffic. For safety reasons, the front-seat passenger used the system while the driver concentrated entirely on driving. For analysis and visualization of the user's interaction with the environment, 528 buildings of the city were modeled in 2.5D using an airborne LIDAR scan, Google Earth, and a spatial database. Based on the results of our study, we propose a new algorithm for spatial reference resolution together with a scanning mechanism.
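To make the reference-resolution problem concrete, the following minimal sketch (not the authors' algorithm) illustrates one common approach: casting a ray from the user's position along the measured gaze, head, or pointing direction and intersecting it with 2.5D building footprints. All building data, coordinates, and function names here are hypothetical and for illustration only.

```python
# Minimal sketch of spatial reference resolution by ray casting against
# 2.5D building footprints. Assumes a local metric coordinate frame and
# a heading measured in degrees clockwise from north.
import math
from dataclasses import dataclass

@dataclass
class Building:
    name: str
    footprint: list   # list of (x, y) vertices of the ground polygon
    height: float     # 2.5D extrusion height in metres (unused in this sketch)

def _ray_segment_distance(origin, direction, p1, p2):
    """Distance along the ray to segment p1-p2, or None if there is no hit."""
    ox, oy = origin
    dx, dy = direction
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-9:                                  # ray parallel to segment
        return None
    t = ((p1[0] - ox) * ey - (p1[1] - oy) * ex) / denom    # distance along the ray
    u = ((p1[0] - ox) * dy - (p1[1] - oy) * dx) / denom    # position on the segment
    if t >= 0.0 and 0.0 <= u <= 1.0:
        return t
    return None

def resolve_reference(origin, heading_deg, buildings, max_range=300.0):
    """Return the building whose footprint is first hit by the pointing ray."""
    direction = (math.sin(math.radians(heading_deg)),
                 math.cos(math.radians(heading_deg)))
    best, best_dist = None, max_range
    for b in buildings:
        pts = b.footprint
        for i in range(len(pts)):
            d = _ray_segment_distance(origin, direction,
                                      pts[i], pts[(i + 1) % len(pts)])
            if d is not None and d < best_dist:
                best, best_dist = b, d
    return best

# Example: user at the origin pointing roughly north-east (hypothetical data).
city = [Building("Rathaus",  [(30, 40), (50, 40), (50, 60), (30, 60)], 25.0),
        Building("Bahnhof", [(-40, 20), (-20, 20), (-20, 35), (-40, 35)], 12.0)]
print(resolve_reference((0.0, 0.0), 40.0, city))   # expected: the "Rathaus" building
```

A real system would additionally have to fuse several noisy direction cues (gaze, head pose, pointing) and account for vehicle motion, which is where a dedicated resolution algorithm and scanning mechanism such as the one proposed in the paper come in.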