Semi-supervised learning methods usually predict labels only for the unlabeled data that appear in the training set, and cannot effectively predict labels for test data never seen during training. To handle this out-of-sample problem, many inductive methods constrain the predicted label matrix to be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of the data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose an elastic embedding constraint on the predicted label matrix, so that the manifold structure can be better explored. To solve our new objective, as well as a more general optimization problem, we study a novel adaptive loss with an efficient optimization algorithm. Our adaptive loss minimization method combines the advantages of the L1 norm and the L2 norm: it is robust to data outliers under a Laplacian distribution and efficiently learns from normal data under a Gaussian distribution. Experiments on image classification tasks show that our approach outperforms other state-of-the-art methods.
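The abstract does not spell out the adaptive loss itself, so the following is a minimal Python sketch of one loss with the stated properties: approximately quadratic (L2-like) for small residuals, so Gaussian-style noise is fit efficiently, and approximately linear (L1-like) for large residuals, so Laplacian-style outliers contribute only bounded influence. The specific form (1 + sigma) * r^2 / (|r| + sigma) and the parameter name sigma are assumptions for illustration, not necessarily the exact loss used in the paper.

import numpy as np

def adaptive_loss(residuals, sigma=1.0):
    """Adaptive loss interpolating between L2 and L1 behavior.

    For |r| much smaller than sigma the loss is roughly quadratic
    (L2-like); for |r| much larger than sigma it grows roughly
    linearly (L1-like). The form used here is one common choice
    with these properties, assumed for illustration only.
    """
    r = np.abs(residuals)
    return np.sum((1.0 + sigma) * r**2 / (r + sigma))

# Small residual (inlier): nearly quadratic, close to 2 * 0.1**2.
print(adaptive_loss(np.array([0.1])))   # ~0.018

# Large residual (outlier): grows roughly linearly, ~2 * |r|.
print(adaptive_loss(np.array([10.0])))  # ~18.2, not ~200

In practice a loss of this shape is typically minimized with an iteratively reweighted least-squares scheme, solving a weighted L2 problem at each step, which is consistent with the efficient optimization algorithm the abstract mentions.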