A comparative study is carried out on the problem of selecting a subset of basis functions in regression tasks. The emphasis is placed on practical requirements such as the sparsity of the solution and the computational effort. A distinction is made according to the implicit or explicit nature of the selection process. Explicit selection methods choose the basis functions from a set of candidates by means of a search process; implicit methods consider a model with all the basis functions and compute the model parameters in such a way that several of them become zero. The explicit methods have the advantage that both the sparsity and the computational effort can be controlled. We build on earlier work on Bayesian interpolation to design efficient explicit selection methods guided by the model evidence, since there is strong indication that the evidence prefers simple models that generalize well. Our experimental results indicate that implicit and explicit methods achieve very similar generalization performance, but they use different numbers of basis functions and incur very different computational costs. We also report that the models with the highest evidence are not necessarily those with the best generalization performance.
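As a minimal sketch of the explicit-selection idea, the following toy example greedily adds one radial basis function at a time from a candidate set. For simplicity it uses residual-sum-of-squares reduction as a stand-in for the evidence-guided criterion discussed in the text; the Gaussian width, the choice of centres at the training inputs, and the subset size are all illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: a noisy sine wave.
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(50)

# Candidate Gaussian RBF basis functions centred at the training inputs
# (illustrative choice; width 0.1 is an arbitrary assumption).
Phi = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))

def greedy_select(Phi, y, k):
    """Explicit selection: at each step, add the candidate basis function
    that most reduces the residual sum of squares of the least-squares fit
    (a matching-pursuit-style proxy for an evidence-guided search)."""
    selected = []
    for _ in range(k):
        best_j, best_rss = None, np.inf
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            w, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
            rss = np.sum((y - Phi[:, cols] @ w) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

sel = greedy_select(Phi, y, 5)
w, *_ = np.linalg.lstsq(Phi[:, sel], y, rcond=None)
rss = np.sum((y - Phi[:, sel] @ w) ** 2)
```

An implicit method would instead fit all 50 candidate weights under a sparsity-inducing penalty (e.g. an L1 penalty, or the automatic-relevance priors of the relevance vector machine) and let several weights shrink to zero; the explicit search above makes the subset size `k`, and hence both sparsity and cost, directly controllable.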