An Attentive Machine Interface Using Geo-Contextual Awareness for Mobile Vision Tasks

  • Authors:
  • Katrin Amlacher, Lucas Paletta

  • Affiliations:
  • JOANNEUM RESEARCH Forschungsgesellschaft mbH, Institute of Digital Image Processing, Wastiangasse 6, 8010 Graz, Austria (email: katrin.amlacher@joanneum.at, lucas.paletta@joanneum.at)

  • Venue:
  • Proceedings of the 2008 conference on ECAI 2008: 18th European Conference on Artificial Intelligence
  • Year:
  • 2008


Abstract

The presented work situates attention within the architecture of ambient intelligence, in particular for mobile vision tasks in multimodal interfaces. A major issue for the performance of these services is uncertainty in the visual information, which stems from the requirement to index into a huge amount of reference images. We propose a system implementation -- the Attentive Machine Interface (AMI) -- that enables contextual processing of multi-sensor information in a probabilistic framework, for example exploiting contextual information from geo-services to cut down the visual search space to a subset of relevant object hypotheses. We present a proof-of-concept with results from bottom-up information processing on experimental tracks and image capture in an urban scenario, extracting object hypotheses in the local context from both (i) mobile image-based appearance and (ii) GPS-based positioning, and verify an improvement in recognition accuracy (10%) using Bayesian decision fusion. Finally, we demonstrate that top-down information processing -- geo-information priming the recognition method in feature space -- can yield even better results (13%) and more economical computing, verifying the advantage of multi-sensor attentive processing in multimodal interfaces.
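The Bayesian decision fusion described in the abstract can be illustrated as follows. This is a minimal sketch under assumed names and numbers, not the authors' implementation: a GPS-derived geo-context prior over nearby object hypotheses is combined with appearance-based likelihoods from the visual recognizer via Bayes' rule.

```python
# Hypothetical sketch of the Bayesian decision fusion the abstract
# describes: geo-context restricts and weights the object hypotheses,
# and image-based appearance likelihoods update that prior. All
# function names, object labels, and probabilities are illustrative
# assumptions, not taken from the paper.

def fuse(prior_geo, likelihood_visual):
    """Posterior over object hypotheses via Bayes' rule.

    prior_geo: dict mapping object -> P(object | GPS position)
    likelihood_visual: dict mapping object -> P(image features | object)
    """
    # Unnormalized posterior: prior times likelihood per hypothesis.
    unnorm = {obj: prior_geo[obj] * likelihood_visual.get(obj, 0.0)
              for obj in prior_geo}
    # Normalize so the posterior sums to one.
    z = sum(unnorm.values()) or 1.0
    return {obj: p / z for obj, p in unnorm.items()}

# Geo-service prior: only objects plausible near the current GPS fix
# receive mass, which is how the visual search space is cut down.
prior = {"town_hall": 0.5, "opera": 0.3, "museum": 0.2}
# Appearance-based likelihoods from the mobile image recognizer.
visual = {"town_hall": 0.2, "opera": 0.7, "museum": 0.1}

posterior = fuse(prior, visual)
best = max(posterior, key=posterior.get)
```

In this toy example the geo-prior alone favors "town_hall", but the fused posterior selects "opera" because the visual evidence outweighs the prior; the top-down variant the abstract mentions would instead use the geo-prior earlier, to prime the recognizer in feature space before matching.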