Active Information Selection: Visual Attention Through the Hands

  • Authors:
  • Chen Yu; L. B. Smith; Hongwei Shen; A. F. Pereira; T. Smith

  • Affiliations:
  • Psychological & Brain Sciences Department, Indiana University, Bloomington, IN, USA

  • Venue:
  • IEEE Transactions on Autonomous Mental Development
  • Year:
  • 2009

Abstract

An important goal in studying both human and artificial intelligence is to understand how a natural or an artificial learning system deals with the uncertainty and ambiguity of the real world. For a natural intelligence system such as a human toddler, the relevant aspects of a learning environment are only those that make contact with the learner's sensory system. In real-world interactions, what the child perceives depends critically on his own actions, as these actions bring information into and out of the learner's sensory field. The present analyses indicate how, in the case of a toddler playing with toys, these perception-action loops may simplify the learning environment by selecting relevant information and filtering out irrelevant information. This paper reports new findings obtained with a novel method that describes the visual learning environment from a young child's point of view and measures the visual information the child perceives during real-time toy play with a parent. The main results are that 1) what the child perceives depends primarily on his own actions but also on his social partner's actions; 2) manual actions, in particular, play a critical role in creating visual experiences in which one object dominates; and 3) this selecting and filtering of visual objects through the child's actions provides more constrained and cleaner input that seems likely to facilitate cognitive learning processes. These findings have broad implications for how one studies and thinks about human and artificial learning systems.