A computational geometry approach for pareto-optimal selection of neural networks
ICANN'12 Proceedings of the 22nd international conference on Artificial Neural Networks and Machine Learning - Volume Part II
Most modern multi-objective machine learning methods are based on evolutionary optimization algorithms, which are globally convergent but usually deliver nondeterministic results. In this work we propose a deterministic global solution to the multi-objective problem of supervised learning, using the methodology of nonlinear programming. The proposed multi-objective algorithm performs a global search for Pareto-optimal hypotheses in the space of RBF networks, determining both their weights and their basis functions. Combined with the Akaike and Bayesian information criteria, the algorithm demonstrates high generalization efficiency on several synthetic and real-world benchmark problems.
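To illustrate the general idea (not the authors' specific algorithm), the sketch below fits Gaussian RBF networks of increasing size to synthetic data, treats training error and model complexity as competing objectives, extracts the Pareto-optimal hypotheses, and then selects among them with the Akaike information criterion in its common least-squares form AIC = n·log(SSE/n) + 2k. The basis-function width, center placement, and AIC form are illustrative assumptions.

```python
import numpy as np

def rbf_design(x, centers, width):
    # Gaussian RBF design matrix: Phi[i, j] = exp(-(x_i - c_j)^2 / (2*width^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_rbf(x, y, k, width=0.5):
    # Illustrative choice: k centers spaced uniformly, weights by least squares
    centers = np.linspace(x.min(), x.max(), k)
    Phi = rbf_design(x, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    sse = float(np.sum((Phi @ w - y) ** 2))
    return sse, w, centers

def pareto_front(points):
    # Indices of (error, complexity) pairs not dominated by any other pair
    front = []
    for i, (e_i, c_i) in enumerate(points):
        dominated = any(
            e_j <= e_i and c_j <= c_i and (e_j < e_i or c_j < c_i)
            for j, (e_j, c_j) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Synthetic benchmark: noisy sine data
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 60)
y = np.sin(x) + 0.1 * rng.standard_normal(x.size)

# One candidate hypothesis per network size k
ks = list(range(2, 11))
models = [(fit_rbf(x, y, k)[0], k) for k in ks]
front = pareto_front(models)

# AIC over the candidates: n*log(SSE/n) + 2k (least-squares form)
n = x.size
aic = [n * np.log(sse / n) + 2 * k for sse, k in models]
best_k = ks[int(np.argmin(aic))]
```

An evolutionary method would sample this error/complexity trade-off stochastically; the deterministic formulation described in the abstract instead guarantees the same Pareto set on every run.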