Contextual Learning in the Selective Attention for Identification model (CL-SAIM): Modeling contextual cueing in visual search tasks

  • Authors:
  • Andreas Backhaus; Dietmar Heinke; Glyn W. Humphreys

  • Affiliations:
  • University of Birmingham; University of Birmingham; University of Birmingham

  • Venue:
  • CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Workshops - Volume 03
  • Year:
  • 2005

Abstract

Visual search is a commonly used paradigm in psychological studies of attention. It is well known that search efficiency is influenced by a broad range of factors, e.g. the featural similarity between targets and distractors [4] or the featural configuration (see [16] for a review). Recently, a series of papers by Chun and colleagues (see [1] for a review) has established a new factor that influences search, termed "contextual cueing": visual search is more efficient when targets and distractors are repeated in the same locations across trials, compared with when they fall in new locations. In order to simulate this effect we extended the Selective Attention for Identification model (SAIM [5, 7]) with a mechanism for contextual learning (CL-SAIM). The learning mechanism is based on a Hopfield pattern memory with asymmetric weights. This memory module integrates two functions: on the one hand it stores the spatial configuration of search displays, and on the other it improves target detection for already seen displays. In this paper we will demonstrate that this relatively simple extension of SAIM is capable of simulating the experimental findings of [2].
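
The abstract only sketches the memory module, so the following is a minimal, hypothetical illustration of a Hopfield-style pattern memory, not the authors' CL-SAIM implementation. It uses the standard symmetric Hebbian variant to store binary "display" configurations and recall one from a noisy cue; the asymmetric weights and the coupling to SAIM's selection network described in the paper are not reproduced here. All names and parameters are illustrative assumptions.

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian learning for a Hopfield memory: W = P^T P / N, zero diagonal.
    # (Symmetric weights; CL-SAIM itself uses an asymmetric variant.)
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, cue, sweeps=20, rng=None):
    # Asynchronous sign updates; the state settles into a stored attractor.
    if rng is None:
        rng = np.random.default_rng(0)
    s = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Toy "search displays": +1 marks an occupied location on a flattened
# 5x5 grid, -1 an empty one (a stand-in for a display's spatial layout).
rng = np.random.default_rng(42)
displays = np.where(rng.random((3, 25)) < 0.3, 1, -1)
W = train_hopfield(displays)

# Cue the memory with a corrupted version of display 0, i.e. a repeated
# configuration seen with some noise, and check whether it is recovered.
cue = displays[0].copy()
flipped = rng.choice(25, size=5, replace=False)
cue[flipped] *= -1
out = recall(W, cue, rng=rng)
print("recovered stored display:", np.array_equal(out, displays[0]))
```

In this toy setting the memory plays only the first of the module's two roles, storing spatial configurations; how recall would bias target detection inside SAIM's selection network is specific to the paper's model and is not attempted here.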