First, we briefly introduce the basic idea of data smoothing regularization, first proposed by Xu [Brain-like computing and intelligent information systems (1997) 241] for parameter learning; it works in a way similar to Tikhonov regularization but offers an easy solution to the difficulty of determining an appropriate hyper-parameter. The roles of this regularization are then demonstrated on Gaussian mixtures via smoothed versions of the EM algorithm, the BYY model selection criterion, and the adaptive harmony algorithm, as well as the related rival penalized competitive learning (RPCL). Second, these studies are extended to a mixture of Gaussian-type reconstruction errors, which provides a new probabilistic formulation for the multi-sets learning approach [Proc. IEEE ICNN94 I (1994) 315] that learns multiple objects in typical geometric structures such as points, lines, hyperplanes, circles, ellipses, and templates of given shapes. Finally, insights are provided into three problem-solving strategies, namely competition-penalty adaptation based learning, global evidence accumulation based selection, and guess-test based decision, and a general problem-solving paradigm is suggested.
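To make the first point concrete, here is a minimal sketch of a smoothed EM algorithm for a Gaussian mixture. It assumes the data smoothing regularization enters as an added h²I term in each M-step covariance update, which keeps covariances from collapsing; the function name, signature, and this exact placement of the smoothing term are illustrative assumptions, not the paper's precise algorithm.

```python
import numpy as np

def smoothed_em_gmm(X, k, h2, iters=50, mu0=None, seed=0):
    """EM for a Gaussian mixture with a data-smoothing style term:
    each M-step covariance gets an extra h2 * I (h2 = squared smoothing
    parameter).  The placement of h2 here is an assumption for
    illustration, not the paper's exact formulation."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = (np.array(mu0, dtype=float) if mu0 is not None
          else X[rng.choice(n, k, replace=False)].astype(float))
    cov = np.stack([np.eye(d) for _ in range(k)])
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        R = np.empty((n, k))
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(cov[j])
            det = np.linalg.det(cov[j])
            quad = np.sum(diff @ inv * diff, axis=1)
            R[:, j] = pi[j] * np.exp(-0.5 * quad) / np.sqrt((2 * np.pi) ** d * det)
        R /= R.sum(axis=1, keepdims=True)
        # M-step with smoothing: every covariance gets + h2 * I
        Nk = R.sum(axis=0)
        pi = Nk / n
        for j in range(k):
            mu[j] = R[:, j] @ X / Nk[j]
            diff = X - mu[j]
            cov[j] = (R[:, j, None] * diff).T @ diff / Nk[j] + h2 * np.eye(d)
    return pi, mu, cov
```

The added h²I bounds every covariance eigenvalue below by h², so the usual likelihood singularities of plain EM cannot occur.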
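Rival penalized competitive learning, mentioned alongside the adaptive harmony algorithm, can be sketched as follows. This simplified version uses fixed learning rates and omits the conscience (winning-frequency) weighting of the full algorithm; the function name and defaults are illustrative assumptions.

```python
import numpy as np

def rpcl(X, k, eta=0.05, gamma=0.002, epochs=20, seed=0, init=None):
    """Rival penalized competitive learning (simplified sketch): for each
    sample, the nearest center (winner) moves toward it, while the second
    nearest (rival) is pushed away at a much smaller rate gamma.  Surplus
    centers get driven away from the data, which is the source of RPCL's
    cluster-number selection effect.  The conscience/frequency weighting
    of the full algorithm is omitted here."""
    rng = np.random.default_rng(seed)
    C = (np.array(init, dtype=float) if init is not None
         else X[rng.choice(len(X), k, replace=False)].astype(float))
    for _ in range(epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(C - x, axis=1)
            w, r = np.argsort(d)[:2]      # winner and rival indices
            C[w] += eta * (x - C[w])      # attract the winner
            C[r] -= gamma * (x - C[r])    # penalize the rival
    return C
```

The key design choice is gamma << eta: the rival penalty is strong enough to expel redundant centers but too weak to perturb centers that genuinely own a cluster.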
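For the multi-sets learning of geometric objects via reconstruction errors, a hedged sketch of the line case alternates hard assignment by orthogonal reconstruction error with a PCA refit of each set (a k-lines style simplification under hard assignments, not the paper's probabilistic mixture formulation):

```python
import numpy as np

def k_lines(X, k, iters=30, seed=0):
    """Sketch of multi-sets learning for lines: alternate (i) assigning
    each point to the line with the smallest orthogonal reconstruction
    error and (ii) refitting each line by PCA of its assigned points.
    This is a simplified hard-assignment variant for illustration."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(0, k, n)
    for _ in range(iters):
        means, dirs = [], []
        for j in range(k):
            P = X[labels == j]
            if len(P) < 2:                # re-seed a degenerate set
                P = X[rng.choice(n, 2, replace=False)]
            m = P.mean(axis=0)
            # principal direction = top right-singular vector
            _, _, Vt = np.linalg.svd(P - m, full_matrices=False)
            means.append(m)
            dirs.append(Vt[0])
        means, dirs = np.array(means), np.array(dirs)
        # squared orthogonal reconstruction error to each line
        err = np.empty((n, k))
        for j in range(k):
            diff = X - means[j]
            proj = diff @ dirs[j]
            err[:, j] = np.sum(diff ** 2, axis=1) - proj ** 2
        labels = err.argmin(axis=1)
    return means, dirs, labels
```

The same alternation generalizes to the other structures listed in the abstract by swapping in the appropriate reconstruction error (point-to-hyperplane, point-to-circle, and so on).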