Manifold regularization (MR) is a promising regularization framework for semi-supervised learning. It introduces an additional penalty term that regularizes the smoothness of functions on data manifolds, and it has proven very effective at exploiting the underlying geometric structure of data for classification. The performance of MR algorithms has been shown to depend heavily on the design of this additional penalty term. In this paper, we propose a new approach that defines the penalty term on manifolds through sparse representations instead of the adjacency graphs of the data. Building this novel penalty term takes two steps. First, the best sparse linear reconstruction coefficients for each data point are computed by ℓ1-norm minimization. Second, the learner is subject to a cost function that aims to preserve these sparse coefficients; this cost function serves as the new penalty term in the regularization algorithm. Compared with previous semi-supervised learning algorithms, the new penalty term requires fewer input parameters and has strong discriminative power for classification. We propose a least squares classifier built on this penalty term, called the Sparse Regularized Least Square Classification (S-RLSC) algorithm. Experiments on real-world data sets show that our algorithm is very effective.
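The two-step construction described above can be sketched in code. This is a minimal illustration, not the paper's exact formulation: it uses scikit-learn's Lasso as a stand-in for the ℓ1-norm minimization, a linear kernel, and assumed regularization parameters (`gamma_A`, `gamma_I`, `alpha`); the penalty matrix M = (I − S)ᵀ(I − S) encodes the cost of a prediction vector f deviating from its sparse reconstruction S f.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_coefficients(X, alpha=0.1):
    """Step 1: reconstruct each point from all the others under an l1
    penalty (Lasso here approximates the paper's l1-norm minimization)."""
    n = X.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=0)   # dictionary: every other point
        model = Lasso(alpha=alpha, max_iter=5000)
        model.fit(others.T, X[i])          # X[i] ~ others.T @ coeffs
        S[i] = np.insert(model.coef_, i, 0.0)  # zero weight on the point itself
    return S

def s_rlsc(X, y, labeled, gamma_A=1e-3, gamma_I=0.1, alpha=0.1):
    """Step 2 (sketch): least squares classification regularized by the
    sparsity-preserving penalty f^T M f, with M = (I - S)^T (I - S).
    Parameter names and the linear kernel are illustrative assumptions."""
    n = X.shape[0]
    S = sparse_coefficients(X, alpha)
    I = np.eye(n)
    M = (I - S).T @ (I - S)                # penalty term on predictions
    J = np.zeros((n, n))
    J[labeled, labeled] = 1.0              # selects labeled points in the loss
    K = X @ X.T                            # linear kernel matrix
    Y = np.zeros(n)
    Y[labeled] = y[labeled]
    # Representer-style solution: (J K + gamma_A n I + gamma_I M K) a = Y
    a = np.linalg.solve(J @ K + gamma_A * n * I + gamma_I * M @ K, Y)
    return K @ a                           # predicted values on all points
```

With well-clustered data, points are reconstructed mostly from neighbors in the same cluster, so the penalty pushes unlabeled points toward the labels of the cluster they belong to; only `alpha` and the two trade-off weights need tuning, with no graph-construction parameters such as a neighborhood size.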