Modeling embodied visual behaviors

  • Authors:
  • Nathan Sprague; Dana Ballard; Al Robinson

  • Affiliations:
  • Kalamazoo College, Kalamazoo, Michigan; University of Rochester, Rochester, New York; University of Rochester, Rochester, New York

  • Venue:
  • ACM Transactions on Applied Perception (TAP)
  • Year:
  • 2007

Abstract

To make progress in understanding human visuomotor behavior, we will need to understand its basic components at an abstract level. One way to achieve such an understanding would be to create a model of a human with sufficient complexity to generate such behaviors. Recent technological advances allow progress in this direction. Graphics models that simulate extensive human capabilities can be used as platforms from which to develop synthetic models of visuomotor behavior. Currently, such models capture only a small portion of a full behavioral repertoire, but for the behaviors that they do model, they can describe complete visuomotor subsystems at a useful level of detail. The value in doing so is that the body's elaborate visuomotor structures greatly simplify the specification of the abstract behaviors that guide them. The net result is that, essentially, one is faced with proposing an embodied “operating system” model for picking the right set of abstract behaviors at each instant. This paper outlines one such model. A centerpiece of the model uses vision to aid the behavior that has the most to gain from taking environmental measurements. Preliminary tests of the model against human performance in realistic VR environments show that the main features of the model are reflected in human behavior.
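The arbitration idea highlighted in the abstract, giving the visual measurement to whichever behavior has the most to gain from an environmental update, can be illustrated with a short sketch. The sketch below is not the paper's implementation: the scalar state, the loss_per_unit_error weights, and the three walking-related behaviors are assumptions introduced only to make the selection rule concrete.

    import random

    class Behavior:
        """Toy behavior that tracks a scalar state estimate and its uncertainty.

        Illustrative only; the scalar state and loss model are assumptions,
        not the representation used in the paper.
        """

        def __init__(self, name, loss_per_unit_error):
            self.name = name
            self.estimate = 0.0           # current belief about task-relevant state
            self.uncertainty = 0.0        # grows while the behavior goes unmeasured
            self.loss_per_unit_error = loss_per_unit_error

        def predict(self):
            # Without a fresh measurement, uncertainty about the state grows.
            self.uncertainty += 1.0

        def expected_loss(self):
            # Expected cost of acting on a stale estimate: higher uncertainty
            # and higher stakes both raise it.
            return self.loss_per_unit_error * self.uncertainty

        def perceive(self, true_state):
            # A visual measurement refreshes the estimate and resets uncertainty.
            self.estimate = true_state
            self.uncertainty = 0.0

    def arbitrate(behaviors):
        """Give the single perceptual 'slot' to the behavior with most to gain."""
        return max(behaviors, key=lambda b: b.expected_loss())

    if __name__ == "__main__":
        # Hypothetical behaviors with different stakes (names are placeholders).
        behaviors = [
            Behavior("avoid_obstacles", loss_per_unit_error=3.0),
            Behavior("follow_path", loss_per_unit_error=1.0),
            Behavior("pick_up_object", loss_per_unit_error=0.5),
        ]
        for step in range(10):
            for b in behaviors:
                b.predict()
            chosen = arbitrate(behaviors)
            chosen.perceive(true_state=random.random())
            print(step, chosen.name)

Under this scheme each behavior's uncertainty grows while it goes unmeasured, so perception naturally rotates among behaviors at a rate proportional to how costly their uncertainty is, which is the flavor of gaze allocation the abstract describes.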