This paper presents a new learning theory (a set of principles for brain-like learning) and a corresponding algorithm for the neural-network field. The learning theory defines computational characteristics that are much more brain-like than those of classical connectionist learning. Robust and reliable learning algorithms result when these principles are followed rigorously in the development of neural-network algorithms. The paper also presents a new algorithm for generating radial basis function (RBF) nets for function approximation, designed according to the proposed set of learning principles. The net generated by this algorithm is not a typical RBF net but a combination of “truncated” RBF and other types of hidden units. The algorithm uses random clustering and linear programming (LP) to design and train this “mixed” RBF net. Polynomial time complexity of the algorithm is proven, and computational results are provided for the well-known Mackey-Glass chaotic time series problem, the logistic map prediction problem, various neuro-control problems, and several time series forecasting problems. The algorithm can also be implemented as an online adaptive algorithm.
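To make the overall flow concrete, the following is a minimal, hypothetical sketch of the kind of procedure the abstract outlines: pick RBF centers by random clustering, form “truncated” Gaussian activations, and fit the output weights with a linear program. The truncation radius, the spread heuristic, the bias unit, and the minimax LP objective are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical sketch of a truncated-RBF net fit via random clustering + LP.
# The specific heuristics below are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog

def fit_truncated_rbf(X, y, n_centers=10, truncate_at=3.0, rng=None):
    rng = np.random.default_rng(rng)
    # "Random clustering": draw centers as randomly chosen training points.
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # Spread heuristic (assumption): max inter-center distance / sqrt(2M).
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    sigma = d.max() / np.sqrt(2 * n_centers)
    # Truncated Gaussian activations: zero beyond `truncate_at` * sigma.
    r = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
    H = np.exp(-(r / sigma) ** 2)
    H[r > truncate_at * sigma] = 0.0
    H = np.hstack([H, np.ones((len(X), 1))])  # bias unit (assumption)
    # LP (Chebyshev fit): minimize t subject to |H w - y| <= t for all samples.
    n_w = H.shape[1]
    c = np.zeros(n_w + 1)
    c[-1] = 1.0  # objective: minimize the error bound t
    A_ub = np.vstack([np.hstack([H, -np.ones((len(X), 1))]),
                      np.hstack([-H, -np.ones((len(X), 1))])])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * n_w + [(0, None)])
    w = res.x[:n_w]
    return centers, sigma, w
```

Prediction would reuse the same truncated-activation construction on new inputs; an online variant could re-solve the LP (or warm-start it) as new samples arrive.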