Auxiliary object knowledge influences visually-guided interception behavior

  • Authors:
  • Peter W. Battaglia; Paul R. Schrater; Daniel J. Kersten

  • Affiliations:
  • University of Minnesota; University of Minnesota; University of Minnesota

  • Venue:
  • APGV '05: Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization
  • Year:
  • 2005

Abstract

This work investigated how humans integrate visual information with object knowledge for interception behavior. When attempting to intercept a moving object using only monocular visual information, the optimal interception position may be ambiguous: the observer may be viewing a small object that is near or a large object that is far away. Nevertheless, humans are quite adept at monocular interception, so additional information is likely incorporated to disambiguate the visual input. We hypothesize that object size information is integrated to accomplish this disambiguation. This sort of auxiliary information integration is well described by a Bayesian model of information propagation. We derived a Bayesian model that represents the scene attributes relevant to intercepting an object and the relations among these attributes. Our model combines sensory measurements with prior scene knowledge to infer an object's position. To test our model, we asked participants to intercept a moving ball in virtual reality. In some trials participants were able to see and touch the ball before intercepting it; in others they were only able to see it. When allowed to touch the ball, participants showed improved interception performance. Effectively, they discounted the variation in image size caused by variation in object size to obtain more accurate knowledge of object distance. This discounting is consistent with Bayesian information propagation and confirms our hypothesis that human participants use Bayesian inference to estimate an object's distance for interception.
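
The core inference can be illustrated with a small numerical sketch (this is not the authors' actual model; the prior widths, measurement noise, and grid values below are hypothetical). Under perspective projection, image size scales roughly as object size divided by distance, so a single image-size measurement leaves distance ambiguous. A tighter prior on object size, as when the ball has been touched, yields a sharper posterior over distance once object size is marginalized out.

```python
import numpy as np

# Hypothetical illustration of the Bayesian disambiguation described above:
# image size ~ object size / distance, so a measured image size alone leaves
# distance ambiguous unless object-size knowledge narrows the possibilities.

sizes = np.linspace(0.02, 0.20, 200)   # candidate ball radii (m), assumed range
dists = np.linspace(0.5, 5.0, 200)     # candidate distances (m), assumed range
S, D = np.meshgrid(sizes, dists)       # S varies along axis 1, D along axis 0

measured_image_size = 0.03             # observed angular size (rad), hypothetical
sigma_img = 0.003                      # measurement noise, hypothetical

# Likelihood of the image-size measurement under each (size, distance) pair.
likelihood = np.exp(-0.5 * ((S / D - measured_image_size) / sigma_img) ** 2)

def posterior_over_distance(size_prior_sd):
    """Combine the likelihood with a Gaussian prior on object size and
    marginalize over size to obtain a posterior over distance."""
    size_prior = np.exp(-0.5 * ((S - 0.06) / size_prior_sd) ** 2)
    post = (likelihood * size_prior).sum(axis=1)
    return post / post.sum()

def posterior_sd(post):
    """Standard deviation of the distance posterior."""
    mean = np.sum(post * dists)
    return np.sqrt(np.sum(post * (dists - mean) ** 2))

vague_prior = posterior_over_distance(size_prior_sd=0.05)    # vision only
sharp_prior = posterior_over_distance(size_prior_sd=0.005)   # after touching the ball

print("distance uncertainty (vision only): %.3f m" % posterior_sd(vague_prior))
print("distance uncertainty (size known):  %.3f m" % posterior_sd(sharp_prior))
```

In this sketch, narrowing the object-size prior shrinks the spread of the distance posterior, which mirrors the paper's finding that haptic size information improves interception by discounting image-size variation due to object size.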