Where to Look? Automating Attending Behaviors of Virtual Human Characters

  • Authors:
  • Sonu Chopra-Khullar (Computer and Information Science Department, University of Pennsylvania; schopra@graphics.cis.upenn.edu)
  • Norman I. Badler (Computer and Information Science Department, University of Pennsylvania; badler@central.cis.upenn.edu)

  • Venue:
  • Autonomous Agents and Multi-Agent Systems
  • Year:
  • 2001


Abstract

This research proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations from psychology, human factors, and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and lapses into idling or free viewing. Implementing this framework yields several insights: a set of parameters that govern the observable effects of attention; a vocabulary of looking behaviors for particular motor and cognitive activities; a hierarchy of three levels of eye behavior (endogenous, exogenous, and idling); and a proposed method by which these levels interact.
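The three-level hierarchy described in the abstract can be pictured as a per-frame arbitration among competing gaze requests. The sketch below is illustrative only, not the paper's implementation: the `GazeRequest` type, the `salience` score, and the `CAPTURE_THRESHOLD` constant are all assumptions introduced here to show one plausible policy in which a sufficiently salient involuntary (exogenous) event preempts deliberate (endogenous) looking, and idling fills any remaining gap.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeRequest:
    # "exogenous", "endogenous", or "idle" -- the three levels named in the abstract
    kind: str
    # label of the object or location to look at (hypothetical)
    target: str
    # assumed urgency/salience score in [0, 1]; not a quantity from the paper
    salience: float

# Hypothetical threshold above which an involuntary event captures attention.
CAPTURE_THRESHOLD = 0.7

def arbitrate(requests: List[GazeRequest]) -> Optional[GazeRequest]:
    """Pick the gaze target for the current frame.

    One plausible policy: a salient exogenous event preempts deliberate
    looking; otherwise the strongest endogenous request wins; idling
    (free viewing) is the fallback when nothing else demands attention.
    """
    exo = [r for r in requests
           if r.kind == "exogenous" and r.salience >= CAPTURE_THRESHOLD]
    if exo:
        return max(exo, key=lambda r: r.salience)
    endo = [r for r in requests if r.kind == "endogenous"]
    if endo:
        return max(endo, key=lambda r: r.salience)
    idle = [r for r in requests if r.kind == "idle"]
    return idle[0] if idle else None
```

For example, a loud noise with salience 0.9 would win over an ongoing task-driven glance, while a weak flicker (salience 0.2) would be ignored in favor of the deliberate behavior.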