Visual bootstrapping for unsupervised symbol grounding

  • Authors:
  • Josef Kittler; Mikhail Shevchenko; David Windridge

  • Affiliations:
  • Center for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom (all authors)

  • Venue:
  • ACIVS'06 Proceedings of the 8th international conference on Advanced Concepts For Intelligent Vision Systems
  • Year:
  • 2006


Abstract

Most existing cognitive architectures integrate computer vision and symbolic reasoning. However, a gap remains between low-level scene representations (signals) and abstract symbols. Manually attaching, i.e. grounding, symbols to the physical context makes it impossible to expand system capabilities by learning new concepts. This paper presents a visual bootstrapping approach to unsupervised symbol grounding. The method is based on recursive clustering of a perceptual category domain, controlled by goal acquisition from the visual environment. The novelty of the method lies in the division of goals into three classes: parameter goals, invariant goals and context goals. The proposed system learns incrementally, in such a manner as to allow an effective, transferable representation of high-level concepts.
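To give a feel for the core idea, the following is a minimal, hypothetical sketch of recursive clustering over a perceptual domain: feature vectors are bisected repeatedly until each cluster satisfies a goal criterion (here, a simple compactness threshold standing in for the paper's goal acquisition), and each resulting leaf cluster can then serve as a grounded symbol. All names, thresholds, and the 2-means splitting routine are illustrative assumptions, not the authors' actual algorithm.

```python
import random

def two_means(points, iters=20):
    """Bisect a set of feature vectors with a basic 2-means pass.
    (Illustrative stand-in for the paper's clustering step.)"""
    random.seed(0)  # deterministic for the example
    centers = random.sample(points, 2)
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[dists.index(min(dists))].append(p)
        centers = [
            tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else c
            for g, c in zip(groups, centers)
        ]
    return groups

def spread(points):
    """Mean squared distance to the cluster centroid (compactness)."""
    c = tuple(sum(xs) / len(xs) for xs in zip(*points))
    return sum(sum((a - b) ** 2 for a, b in zip(p, c)) for p in points) / len(points)

def recursive_cluster(points, goal_spread=0.5):
    """Recursively split the perceptual domain until every cluster
    meets the (hypothetical) goal criterion; leaves act as symbols."""
    if len(points) < 2 or spread(points) <= goal_spread:
        return [points]
    left, right = two_means(points)
    if not left or not right:  # degenerate split: stop here
        return [points]
    return recursive_cluster(left, goal_spread) + recursive_cluster(right, goal_spread)

# Two well-separated perceptual categories in a toy 2-D feature space
data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (5.2, 4.9)]
symbols = recursive_cluster(data)
```

In this sketch the stopping threshold plays the role of a goal: tightening or loosening it changes which perceptual distinctions get their own symbol, which is one way to read the paper's claim that goal acquisition controls the recursion.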