Fitting splines to data has long been known to improve dramatically when the knots can be placed adaptively. To assess the quality of the resulting free knot spline, it is essential to characterize its generalization ability. By applying techniques from empirical process theory and approximation theory to bound the estimation error and the approximation error, respectively, we characterize the generalization ability of the free knot spline learning strategy. We show that the pseudo-dimension of free knot splines is essentially a linear function of the number of knots. A rather general class of loss functions is considered, and the squared loss is treated separately because of its favorable properties. We also provide numerical results demonstrating how these theoretical results can guide the choice of the number of knots from the training data, so as to avoid both overfitting and underfitting.
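The knot-selection idea above can be sketched in code. The snippet below is a minimal illustration, not the authors' method: it fits degree-1 splines in a truncated power basis by least squares, places candidate knots at training-data quantiles (a common surrogate for fully free knot optimization), and picks the number of knots that minimizes held-out validation error. All function names and the synthetic data are hypothetical choices for this sketch.

```python
import numpy as np

def spline_basis(x, knots):
    # Degree-1 truncated power basis: [1, x, (x - t_j)_+ for each knot t_j]
    cols = [np.ones_like(x), x] + [np.maximum(x - t, 0.0) for t in knots]
    return np.column_stack(cols)

def fit_spline(x, y, knots):
    # Least-squares fit of the basis coefficients
    coef, *_ = np.linalg.lstsq(spline_basis(x, knots), y, rcond=None)
    return coef

def predict(x, knots, coef):
    return spline_basis(x, knots) @ coef

# Synthetic data: noisy sine on [0, 1]
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(200)

# Simple train/validation split
x_tr, y_tr = x[::2], y[::2]
x_va, y_va = x[1::2], y[1::2]

best = None  # (validation MSE, number of knots)
for k in range(1, 15):
    # Interior knots at equally spaced quantiles of the training inputs
    knots = np.quantile(x_tr, np.linspace(0.0, 1.0, k + 2)[1:-1])
    coef = fit_spline(x_tr, y_tr, knots)
    mse = np.mean((predict(x_va, knots, coef) - y_va) ** 2)
    if best is None or mse < best[0]:
        best = (mse, k)

print(best)  # chosen model: small validation error at a moderate knot count
```

Too few knots leaves the sine underfit (high bias), while many knots fit the noise (high variance); the validation error traces exactly the trade-off that the pseudo-dimension bound quantifies in terms of the number of knots.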