Data smoothing regularization, multi-sets-learning, and problem solving strategies

  • Authors:
  • Lei Xu

  • Affiliations:
  • Department of Computer Science and Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong, China

  • Venue:
  • Neural Networks - 2003 Special issue: Advances in neural networks research — IJCNN'03

  • Year:
  • 2003

Abstract

First, we briefly introduce the basic idea of data smoothing regularization, which was first proposed by Xu [Brain-like computing and intelligent information systems (1997) 241] for parameter learning in a way similar to Tikhonov regularization, but with an easy solution to the difficulty of determining an appropriate hyper-parameter. The roles of this regularization are demonstrated on Gaussian mixtures via smoothed versions of the EM algorithm, the BYY model selection criterion, and the adaptive harmony algorithm, as well as the related rival penalized competitive learning. Second, these studies are extended to a mixture of Gaussian-type reconstruction errors, which provides a new probabilistic formulation for the multi-sets learning approach [Proc. IEEE ICNN94 I (1994) 315] that learns multiple objects in typical geometrical structures such as points, lines, hyperplanes, circles, ellipses, and templates of given shapes. Finally, insights are provided on three problem solving strategies, namely competition-penalty adaptation based learning, global evidence accumulation based selection, and guess-test based decision, and a general problem solving paradigm is suggested.
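
As a concrete illustration of the data smoothing idea summarized above (a sketch, not code from the paper), the snippet below shows a smoothed variant of EM for a Gaussian mixture in which each data point is treated as a small Gaussian ball of radius h, so that h^2 * I is added to every covariance update. The function name smoothed_em_gmm, the fixed user-supplied smoothing parameter h, and the initialization scheme are illustrative assumptions; the paper's point is precisely that h can be determined during learning rather than hand-tuned.

```python
import numpy as np

def smoothed_em_gmm(X, k, h, n_iter=100, seed=0):
    """Sketch of EM for a k-component Gaussian mixture with data smoothing
    regularization: the empirical density is replaced by a Parzen-window
    smoothed density of width h, which shows up as an extra h^2 * I term in
    each covariance update. Assumes a fixed h supplied by the caller."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize means at random data points, covariances at the data covariance,
    # and equal mixing weights.
    mu = X[rng.choice(n, k, replace=False)].astype(float)
    cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(k)])
    alpha = np.full(k, 1.0 / k)

    for _ in range(n_iter):
        # E-step: posterior responsibilities p(j | x_t) from log densities.
        log_resp = np.zeros((n, k))
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(cov[j])
            quad = np.einsum('ni,ij,nj->n', diff, inv, diff)
            logdet = np.linalg.slogdet(cov[j])[1]
            log_resp[:, j] = np.log(alpha[j]) - 0.5 * (quad + logdet + d * np.log(2 * np.pi))
        resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: standard updates, except every covariance gains h^2 * I,
        # the term contributed by smoothing each sample into a Gaussian ball.
        Nj = resp.sum(axis=0)
        alpha = Nj / n
        for j in range(k):
            mu[j] = resp[:, j] @ X / Nj[j]
            diff = X - mu[j]
            cov[j] = (resp[:, j, None] * diff).T @ diff / Nj[j] + (h ** 2) * np.eye(d)
    return alpha, mu, cov
```

With h = 0 the sketch reduces to ordinary EM; a nonzero h keeps the covariance updates away from singularity when a component collapses onto a few samples, which is the regularizing effect the abstract refers to.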