Learning faces with the BIAS model: On the importance of the sizes and locations of fixation regions

  • Authors:
  • Predrag Neskovic; Ian Sherman; Liang Wu; Leon N. Cooper

  • Affiliations:
  • Institute for Brain and Neural Systems, Department of Physics, Brown University, Box 1843, Providence, RI 02912, USA (all authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2009

Abstract

During perception of complex objects, the highest density of fixations occurs on the regions that are most salient. For example, when looking at a face, the regions that receive the highest density of fixations are the eyes, the nose, and the mouth. The fact that some regions within an object are more informative than others means that a learning system that can acquire this information from a teacher, rather than from random fixations, can learn faster and likewise recognize faster. An important question, from both the theoretical and practical points of view, is: how important are the properties of the fixation regions for the learning system? In this work we consider one such system, the Bayesian integrate and shift (BIAS) model for learning object categories, and investigate its sensitivity to changes in the sizes and locations of fixation regions. We test the model on a face category and show that the learning algorithm is robust to large variations in the regions' sizes and locations. Specifically, we show that performance is inversely proportional to the sizes of the fixation regions and that the preferred locations are those closer to the center of the object.
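The abstract does not spell out the BIAS model's internals, but the general idea it names, Bayesian integration of evidence gathered from successive fixation regions, can be sketched generically. The following is a minimal illustrative example, not the published algorithm: each fixation region contributes a class-conditional log-likelihood (the values below are made-up toy numbers), and the evidence is combined under a uniform prior to yield class posteriors.

```python
import math

def integrate_fixations(log_likelihoods_per_class):
    """Combine per-fixation log-likelihoods for each class into
    posterior probabilities, assuming a uniform prior and
    conditionally independent fixations (an illustrative
    simplification, not the BIAS model itself)."""
    # Sum log-likelihoods across fixations for each class.
    totals = {c: sum(lls) for c, lls in log_likelihoods_per_class.items()}
    # Normalize with the log-sum-exp trick for numerical stability.
    m = max(totals.values())
    log_z = m + math.log(sum(math.exp(t - m) for t in totals.values()))
    return {c: math.exp(t - log_z) for c, t in totals.items()}

# Toy example: three fixations (e.g. eye, nose, mouth regions)
# providing evidence for "face" vs. "non-face".
evidence = {
    "face":     [-0.2, -0.1, -0.3],
    "non-face": [-1.5, -2.0, -1.0],
}
posterior = integrate_fixations(evidence)
```

Under this sketch, more informative fixation regions contribute larger likelihood gaps between classes, which is one way to read the abstract's point that a teacher directing fixations to informative regions speeds up learning and recognition.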