Word learning by an extended BAM network

  • Authors:
  • Qinghua Chen; Kai Liu; Fukang Fang

  • Affiliations:
  • Department of Systems Science, School of Management, Beijing Normal University, Beijing, P.R. China (Qinghua Chen; Kai Liu); Institute of Non-equilibrium Systems and State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, P.R. China (Fukang Fang)

  • Venue:
  • ICNC'09: Proceedings of the 5th International Conference on Natural Computation
  • Year:
  • 2009

Abstract

Word learning has been an active topic in cognitive science for many years. Two main theories have been proposed, hypothesis elimination and associative learning, yet neither accounts well for the established experimental findings. A Bayesian inference framework that integrates the strengths of both approaches was proposed recently; it fits several important experiments much better, but its algorithm is rather complicated. Here we propose an extended BAM model that requires only simple computation yet remains consistent with experimental data on how the brain learns a word's meaning from just one or a few positive examples, and that responds properly both to different numbers of samples and to samples drawn from different spans. This may offer a new and promising approach for researchers studying word learning.
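The abstract does not detail the authors' extension, but the underlying BAM (bidirectional associative memory, Kosko 1988) is standard: a weight matrix built from outer products of bipolar pattern pairs, with recall by iterating between the two layers until a stable pair is reached. The sketch below shows only this classical BAM on toy "word"/"meaning" patterns, not the paper's extended model; all names and patterns are illustrative.

```python
import numpy as np

def train_bam(pairs):
    """Classical Kosko BAM: the weight matrix is the sum of outer
    products of bipolar (+1/-1) pattern pairs (x, y)."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    W = np.zeros((n, m))
    for x, y in pairs:
        W += np.outer(x, y)
    return W

def bipolar_sign(v):
    # threshold to {-1, +1}; map 0 to +1 so states stay bipolar
    return np.where(v >= 0, 1, -1)

def recall(W, x, steps=10):
    """Bidirectional recall: iterate x -> y -> x until the x-layer
    state stops changing (a resonant pair)."""
    y = bipolar_sign(W.T @ x)
    for _ in range(steps):
        y = bipolar_sign(W.T @ x)
        x_new = bipolar_sign(W @ y)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x, y

# toy "word" patterns (x) associated with toy "meaning" patterns (y)
x1 = np.array([1, -1, 1, -1]); y1 = np.array([1, 1, -1])
x2 = np.array([-1, 1, -1, 1]); y2 = np.array([-1, 1, 1])
W = train_bam([(x1, y1), (x2, y2)])

_, recalled = recall(W, x1)
print(recalled)  # recovers y1 = [ 1  1 -1]
```

Because training is a single pass of outer products and recall is a few matrix-vector multiplications, this kind of model needs only the "simple calculation" the abstract emphasizes, in contrast to the Bayesian framework's heavier inference.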