Learning with generalization capability by kernel methods of bounded complexity
Journal of Complexity
Estimates of Data Complexity in Neural-Network Learning
SOFSEM '07: Proceedings of the 33rd Conference on Current Trends in Theory and Practice of Computer Science
Estimates of Network Complexity and Integral Representations
ICANN '08: Proceedings of the 18th International Conference on Artificial Neural Networks, Part I
Accuracy of suboptimal solutions to kernel principal component analysis
Computational Optimization and Applications
An integral upper bound for neural network approximation
Neural Computation
Model Complexity of Neural Networks and Integral Transforms
ICANN '09: Proceedings of the 19th International Conference on Artificial Neural Networks, Part I
Smooth Optimal Decision Strategies for Static Team Optimization Problems and Their Approximations
SOFSEM '10: Proceedings of the 36th Conference on Current Trends in Theory and Practice of Computer Science
IJCNN '09: Proceedings of the 2009 International Joint Conference on Neural Networks
On tractability of neural-network approximation
ICANNGA '09: Proceedings of the 9th International Conference on Adaptive and Natural Computing Algorithms
Computational Optimization and Applications
Functional Optimization Through Semilocal Approximate Minimization
Operations Research
Suboptimal Solutions to Team Optimization Problems with Stochastic Information Structure
SIAM Journal on Optimization
A theoretical framework for supervised learning from regions
Neurocomputing
An alternative to the classical Ritz method for approximate optimization is investigated. In the extended Ritz method, sets of admissible solutions are approximated by their intersections with sets of linear combinations of all n-tuples of functions from a given basis. This scheme, called variable-basis approximation, includes trigonometric polynomials with free frequencies, free-node splines, neural networks, and other nonlinear approximating families. Estimates of rates of approximate optimization by the extended Ritz method are derived: upper bounds on the rate at which suboptimal solutions converge to the optimal one are expressed in terms of the degree n of the variable-basis functions, the modulus of continuity of the functional to be minimized, the modulus of Tikhonov well-posedness of the problem, and certain norms tailored to the type of basis. The results are applied to convex best approximation and to kernel methods in machine learning.
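To make the variable-basis scheme concrete, the following is a minimal, hypothetical sketch, not code from any of the papers above: it minimizes a discretized least-squares functional over linear combinations of n Gaussian units whose centers and widths are free parameters, so the basis itself is adjusted during optimization, in contrast to the classical Ritz method, where the first n basis functions are fixed in advance. The Gaussian units, the target function, and the use of L-BFGS-B as a local minimizer are all illustrative assumptions.

```python
# Minimal illustrative sketch of variable-basis approximate minimization.
# Assumptions (not from the source): Gaussian units, a least-squares
# functional, and scipy's L-BFGS-B as the local minimizer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)        # discretized domain
target = np.sin(3 * np.pi * x)         # function to approximate

def model(params, n):
    """Linear combination of n Gaussian units with free centers/widths."""
    a, c, w = params[:n], params[n:2 * n], params[2 * n:]
    units = np.exp(-((x[:, None] - c[None, :]) / w[None, :]) ** 2)
    return units @ a                   # (200, n) @ (n,) -> (200,)

def functional(params, n):
    """Discretized L2 error: the functional to be minimized."""
    return np.mean((model(params, n) - target) ** 2)

for n in (2, 4, 8):                    # degree n of the variable basis
    p0 = np.concatenate([rng.standard_normal(n),     # coefficients
                         np.linspace(-1.0, 1.0, n),  # centers
                         np.full(n, 0.3)])           # widths
    bounds = [(None, None)] * (2 * n) + [(1e-2, None)] * n  # keep widths positive
    res = minimize(functional, p0, args=(n,), method="L-BFGS-B", bounds=bounds)
    print(f"n = {n}: suboptimal value = {res.fun:.2e}")
```

Increasing n enlarges the set of variable-basis functions over which the functional is minimized; the estimates described above bound, in terms of n and the moduli of the problem, how far such a suboptimal solution can be from the optimal one.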