We propose a principled framework for learning with infinitely many features, a situation that typically arises with continuously parameterized feature extraction methods. Such cases occur, for instance, when considering Gabor-based features in computer vision problems or Fourier features for kernel approximation. We cast the problem as that of finding a finite subset of features that minimizes a regularized empirical risk. After analyzing the optimality conditions of this problem, we propose a simple algorithm in the flavour of column-generation techniques. We also show that, using Fourier-based features, it is possible to perform approximate infinite kernel learning. Our experimental results on several datasets show the benefits of the proposed approach in several situations, including texture classification and large-scale kernelized problems (involving about 100,000 examples).
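To make the "Fourier features for kernel approximation" idea concrete, here is a minimal sketch of random Fourier features in the style of Rahimi and Rechtʼs construction, where inner products of the randomized features approximate a Gaussian kernel. This is an illustration of the general technique only, not the paperʼs own algorithm; the function name and parameters are chosen for this example.

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, rng=None):
    """Map X (n_samples, d) to a random feature space whose inner
    products approximate the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies drawn from the kernel's spectral density
    # (Gaussian with variance 2*gamma), phases uniform on [0, 2*pi).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Sanity check: feature inner products approximate the exact RBF kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, n_features=20000, gamma=0.5, rng=1)
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-0.5 * sq_dists)
print(np.abs(K_approx - K_exact).max())  # small approximation error
```

A learner trained on such finite random features approximates the corresponding kernel machine, which is the sense in which Fourier-based features enable approximate infinite kernel learning at scale.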