Learning by integrating information within and across fixations

  • Authors:
  • Predrag Neskovic; Liang Wu; Leon N. Cooper

  • Affiliations:
  • Institute for Brain and Neural Systems and Department of Physics, Brown University, Providence, RI (all authors)

  • Venue:
  • ICANN'06: Proceedings of the 16th International Conference on Artificial Neural Networks, Part II
  • Year:
  • 2006

Abstract

In this work we introduce a Bayesian Integrate And Shift (BIAS) model for learning object categories. The model is biologically inspired and uses Bayesian inference to integrate information within and across fixations. In our model, an object is represented as a collection of features arranged at specific locations relative to the fixation point. Although the number of feature detectors is large, we show that learning does not require a large amount of training data: by introducing an intermediate representation, object views, between an object and its features, we reduce the dependence among the feature detectors. We tested the system on four object categories and demonstrated that it can learn a new category from only a few training examples.
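
The abstract describes Bayesian integration of feature evidence within and across fixations. The following Python sketch illustrates that general idea under simplifying assumptions; it is not the authors' exact BIAS formulation. The feature_likelihood function, the conditional-independence assumption, and all names are hypothetical stand-ins for the learned class-conditional model described in the paper.

import numpy as np

def update_posterior(prior, likelihoods):
    """One Bayesian update: combine the prior over categories with the
    likelihood of the features observed at the current fixation."""
    unnormalized = prior * likelihoods
    return unnormalized / unnormalized.sum()

def recognize(fixations, prior, feature_likelihood):
    """Integrate evidence across a sequence of fixations.

    fixations: list of observations; each observation is a list of
        (feature_id, relative_position) pairs detected around that fixation.
    feature_likelihood(category, feature_id, rel_pos): hypothetical lookup
        returning P(feature at rel_pos | category), standing in for the
        learned class-conditional model.
    """
    posterior = np.asarray(prior, dtype=float).copy()
    n_categories = len(posterior)
    for observation in fixations:
        likelihoods = np.ones(n_categories)
        for feature_id, rel_pos in observation:
            for c in range(n_categories):
                # Assume conditional independence of features given the
                # category; in the paper, the intermediate object-view layer
                # is what weakens the dependence among detectors, which this
                # sketch simplifies away.
                likelihoods[c] *= feature_likelihood(c, feature_id, rel_pos)
        posterior = update_posterior(posterior, likelihoods)
    return posterior

The sketch simply multiplies per-fixation likelihoods into a running posterior over categories, which captures the "integrate" step; the "shift" to a new fixation point corresponds to moving to the next observation in the list, with feature positions always expressed relative to the current fixation.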