A generalized, tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. At each stage of the density-estimation process, a tunable kernel is determined, namely its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are then updated using the multiplicative nonnegative quadratic programming algorithm to enforce the nonnegativity and unity constraints, and this weight-updating process has the additional desirable ability to further reduce the model size. The proposed tunable-kernel model offers advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model, which restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. At the same time, it does not optimize all the model parameters jointly and thus avoids the high-dimensional, ill-conditioned nonlinear optimization problems associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact yet accurate density estimates.
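The weight-updating step above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes the generic multiplicative nonnegative quadratic programming (MNQP) update for a quadratic objective min (1/2) wᵀBw − vᵀw with w ≥ 0, combined with a simple renormalization onto the simplex to impose the unity constraint; the Gaussian kernel design, the candidate centers, and the Parzen-window plug-in target used to form B and v are illustrative choices, not those of the paper.

```python
import numpy as np

def gaussian_kernel(x, centers, sigma):
    # Normalized 1-D Gaussian kernels evaluated at points x for each center.
    d = x[:, None] - centers[None, :]
    return np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mnqp_weights(B, v, n_iter=200, eps=1e-12):
    """Multiplicative update for min (1/2) w^T B w - v^T w with w >= 0.

    B and v are assumed elementwise nonnegative (true for Gaussian kernel
    Gram matrices).  Each iteration multiplies w by v / (Bw), which keeps
    the weights nonnegative, then renormalizes so they sum to one
    (a simplification standing in for the paper's constrained update).
    """
    m = len(v)
    w = np.full(m, 1.0 / m)
    for _ in range(n_iter):
        w = w * v / (B @ w + eps)  # multiplicative nonnegative update
        w = w / w.sum()            # enforce the unity constraint
    return w

# Toy data: 1-D Gaussian samples, with candidate kernels on a data subset.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100)
centers = x[:20]          # hypothetical candidate kernel centers
sigma = 0.5               # hypothetical common kernel width
K = gaussian_kernel(x, centers, sigma)          # N x M design matrix
p_parzen = gaussian_kernel(x, x, sigma).mean(axis=1)  # plug-in target density

# Quadratic-program data from a least-squares fit of K w to the target.
B = K.T @ K / len(x)
v = K.T @ p_parzen / len(x)
w = mnqp_weights(B, v)
```

The multiplicative form never flips a weight's sign, so nonnegativity holds by construction, and weights of redundant kernels are driven toward zero, which is the model-size-reducing effect the abstract notes.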