Modeling Unsupervised Perceptual Category Learning

  • Authors:
  • B. M. Lake; G. K. Vallabha; J. L. McClelland

  • Affiliations:
  • B. M. Lake: Dept. of Psychol., Stanford Univ., Stanford, CA

  • Venue:
  • IEEE Transactions on Autonomous Mental Development
  • Year:
  • 2009

Abstract

During the learning of speech sounds and other perceptual categories, category labels are not provided, the number of categories is unknown, and the stimuli are encountered sequentially. These constraints pose a challenge for models, but they have recently been addressed in the online mixture estimation model of unsupervised vowel category learning (see Vallabha in the reference section). The model treats categories as Gaussian distributions, proposing both the number and the parameters of the categories. While the model has been shown to successfully learn vowel categories, it has not been evaluated as a model of the learning process. Here we evaluate it in that role and account for several results: acquired distinctiveness between categories and acquired similarity within categories, a faster increase in discrimination for more acoustically dissimilar vowels, and gradual unsupervised learning of category structure in simple visual stimuli.
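
The abstract describes a general strategy of online mixture estimation: each category is a Gaussian whose parameters are adjusted as stimuli arrive one at a time, and new categories are proposed when a stimulus is poorly explained by the existing ones. As a rough illustration of that idea only, the Python sketch below implements a generic one-dimensional version; the class name, learning rate, novelty threshold, and update rules are assumptions for this sketch and are not taken from the paper's actual algorithm or parameter settings.

```python
import numpy as np


class OnlineGaussianMixture:
    """Illustrative online mixture estimator for one-dimensional stimuli.

    Each category is a Gaussian with a mean, a variance, and a mixing
    weight. A new category is spawned when an incoming stimulus is poorly
    explained by all existing categories; otherwise every category is
    nudged toward the stimulus in proportion to its posterior
    responsibility. All constants here are illustrative assumptions.
    """

    def __init__(self, lr=0.05, novelty_threshold=1e-3, init_var=1.0):
        self.lr = lr                                  # online learning rate
        self.novelty_threshold = novelty_threshold    # spawn-a-category cutoff
        self.init_var = init_var                      # variance of a new category
        self.means, self.vars, self.weights = [], [], []

    def _joint(self, x):
        """Mixing weight times Gaussian density of x for each category."""
        m, v, w = map(np.array, (self.means, self.vars, self.weights))
        dens = np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
        return w * dens

    def observe(self, x):
        """Process a single stimulus, updating or growing the mixture."""
        if not self.means:
            self._spawn(x)
            return
        joint = self._joint(x)
        if joint.sum() < self.novelty_threshold:
            self._spawn(x)                # no category explains x: add one
            return
        resp = joint / joint.sum()        # posterior responsibility per category
        for k, r in enumerate(resp):
            step = self.lr * r
            # Responsibility-weighted online updates of mean, variance, weight
            self.means[k] += step * (x - self.means[k])
            self.vars[k] += step * ((x - self.means[k]) ** 2 - self.vars[k])
            self.weights[k] += self.lr * (r - self.weights[k])

    def _spawn(self, x):
        """Create a new category centred on the current stimulus."""
        self.means.append(float(x))
        self.vars.append(self.init_var)
        self.weights.append(1.0)
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]


# Usage: two latent vowel-like categories along a single acoustic dimension.
rng = np.random.default_rng(0)
stimuli = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])
rng.shuffle(stimuli)
model = OnlineGaussianMixture()
for s in stimuli:
    model.observe(s)
print(len(model.means), [round(m, 2) for m in model.means])
```

With well-separated clusters such as these, the sketch typically settles on two Gaussians near the cluster means; the thresholded spawning step is one simple way to let the number of categories, not just their parameters, be inferred from sequentially presented data.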