We introduce an algorithm that simultaneously estimates a classification function and its gradient within the supervised learning framework. The algorithm is motivated by the problem of finding salient variables and estimating how they covary. An implementation that is efficient in both memory and time is given. The utility of the algorithm is illustrated on simulated data as well as a gene expression data set. An error analysis establishes the convergence of the estimated classification function and gradient to the true classification function and true gradient.