Machine Learning
Atomic Decomposition by Basis Pursuit
SIAM Journal on Scientific Computing
Class prediction and discovery using gene expression data
RECOMB '00 Proceedings of the fourth annual international conference on Computational molecular biology
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond
Choosing Multiple Parameters for Support Vector Machines
Machine Learning
Leave-one-out bounds for kernel methods
Neural Computation
Semi-Supervised Learning on Riemannian Manifolds
Machine Learning
Model Selection for Regularized Least-Squares Algorithm in Learning Theory
Foundations of Computational Mathematics
SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Neural Computation
On Learning Vector-Valued Functions
Neural Computation
Capacity of reproducing kernel spaces in learning theory
IEEE Transactions on Information Theory
Estimation of Gradients and Coordinate Covariation in Classification
The Journal of Machine Learning Research
Gradient learning in a classification setting by gradient descent
Journal of Approximation Theory
Hermite learning with gradient data
Journal of Computational and Applied Mathematics
Variable weighted learning algorithm and its convergence rate
ICNC'09 Proceedings of the 5th international conference on Natural computation
Learning Gradients: Predictive Models that Infer Geometry and Statistical Dependence
The Journal of Machine Learning Research
Learning gradients via an early stopping gradient descent method
Journal of Approximation Theory
Learning gradients with Gaussian processes
PAKDD'10 Proceedings of the 14th Pacific-Asia conference on Advances in Knowledge Discovery and Data Mining - Volume Part II
Refinement of operator-valued reproducing kernels
The Journal of Machine Learning Research
Learning the coordinate gradients
Advances in Computational Mathematics
The convergence rate of a regularized ranking algorithm
Journal of Approximation Theory
We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis establishes the convergence of the estimated gradient to the true gradient. The utility of the algorithm for variable selection and for determining variable covariance is illustrated on simulated data and on two gene expression data sets. For the square loss we provide an implementation that is very efficient in both memory and time.
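The abstract's idea can be illustrated with a deliberately simplified sketch: the full algorithm estimates a gradient *function* in a reproducing kernel Hilbert space, but if we assume a constant gradient vector g, the same first-order Taylor reasoning (y_j − y_i ≈ g · (x_j − x_i) for nearby points) reduces to a locally weighted ridge regression on pairwise differences with a closed-form square-loss solution. The function name, the Gaussian locality weight, and all parameter values below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def learn_gradient(X, y, sigma=1.0, lam=0.1):
    """Illustrative sketch (NOT the paper's RKHS algorithm): estimate a
    single gradient vector g by minimizing

        sum_{i,j} w_ij * (y_i - y_j - g . (x_i - x_j))^2  +  lam * ||g||^2

    where w_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) down-weights distant
    pairs, for which the Taylor approximation is poor.
    """
    n, d = X.shape
    dX = X[:, None, :] - X[None, :, :]            # (n, n, d) pairwise x_i - x_j
    dy = y[:, None] - y[None, :]                  # (n, n)    pairwise y_i - y_j
    W = np.exp(-np.sum(dX**2, axis=2) / (2 * sigma**2))
    # Weighted normal equations: (A + lam I) g = b
    A = np.einsum('ij,ija,ijb->ab', W, dX, dX)    # sum_ij w_ij dX dX^T
    b = np.einsum('ij,ija,ij->a', W, dX, dy)      # sum_ij w_ij dX dy
    return np.linalg.solve(A + lam * np.eye(d), b)
```

For variable selection, the magnitude |g_k| of each coordinate serves as a relevance score: coordinates along which the response actually varies receive large gradient components, while irrelevant coordinates are shrunk toward zero by the regularizer.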