Regularization is an approach to function learning that balances fit and smoothness. In practice, we search for a function $f$ with a finite representation $f = \sum_i c_i \phi_i(\cdot)$. In most treatments, the coefficients $c_i$ are the primary objects of study. We consider value regularization, constructing optimization problems in which the predicted values at the training points are the primary variables and therefore the central objects of study. Although this is a simple change, it has profound consequences. From convex conjugacy and the theory of Fenchel duality, we derive separate optimality conditions for the regularization and loss portions of the learning problem; this technique yields clean and short derivations of standard algorithms. The framework is ideally suited to studying many other phenomena at the intersection of learning theory and optimization. We obtain a value-based variant of the representer theorem, which underscores the transductive nature of regularization in reproducing kernel Hilbert spaces. We unify and extend previous results on learning kernel functions, with very simple proofs. We analyze the use of unregularized bias terms in optimization problems and of low-rank approximations to kernel matrices, obtaining new results in both areas. In summary, value regularization and Fenchel duality together are valuable tools for studying the optimization problems that arise in machine learning.
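The abstract contains no code, so the following is a minimal numerical sketch of our own (the RBF kernel, the synthetic data, and the regularization strength `lam` are arbitrary illustrative choices, not from the paper). It shows the value-regularization viewpoint for the special case of kernel ridge regression: instead of optimizing the expansion coefficients $c$, one optimizes the predicted values $v = Kc$ under the regularizer $v^\top K^{-1} v$, and the two formulations yield the same training-set values.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Gaussian RBF kernel matrix between the rows of X and the rows of Y.
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))                     # training inputs (arbitrary)
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)  # noisy targets (arbitrary)
lam = 0.1                                        # regularization strength (arbitrary)
n = len(y)
K = rbf_kernel(X, X)

# Coefficient view: min_c ||y - K c||^2 + lam * c' K c.
# Stationarity gives (K + lam I) c = y; the predicted values are v = K c.
c = np.linalg.solve(K + lam * np.eye(n), y)
v_coef = K @ c

# Value view: min_v ||y - v||^2 + lam * v' K^{-1} v.
# Stationarity gives (I + lam K^{-1}) v = y; multiplying through by K,
# (K + lam I) v = K y, so K^{-1} never needs to be formed explicitly.
v_val = np.linalg.solve(K + lam * np.eye(n), K @ y)

print(np.allclose(v_coef, v_val))                # True: the two views agree
```

Note how the value view involves only the predicted values at the given points, which is one way to read the transductive flavor of RKHS regularization that the abstract mentions.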