Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes
IEEE Transactions on Pattern Analysis and Machine Intelligence
The Journal of Machine Learning Research
Convex Optimization
Feature selection, L1 vs. L2 regularization, and rotational invariance
ICML '04: Proceedings of the 21st International Conference on Machine Learning
ICML '06: Proceedings of the 23rd International Conference on Machine Learning
Scalable training of L1-regularized log-linear models
ICML '07: Proceedings of the 24th International Conference on Machine Learning
Self-taught learning: transfer learning from unlabeled data
ICML '07: Proceedings of the 24th International Conference on Machine Learning
An Interior-Point Method for Large-Scale L1-Regularized Logistic Regression
The Journal of Machine Learning Research
A quasi-Newton approach to non-smooth convex optimization
ICML '08: Proceedings of the 25th International Conference on Machine Learning
Efficient L1 regularized logistic regression
AAAI '06: Proceedings of the 21st National Conference on Artificial Intelligence, Volume 1
IEEE Transactions on Neural Networks
Like like alike: joint friendship and interest propagation in social networks
WWW '11: Proceedings of the 20th International Conference on World Wide Web
Conditional topical coding: an efficient topic model conditioned on rich features
KDD '11: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Self-taught dimensionality reduction on the high-dimensional small-sized data
Pattern Recognition
Semi-supervised multi-label classification: a simultaneous large-margin, subspace learning approach
ECML PKDD '12: Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases, Part II
Group sparse topical coding: from code to topic
WSDM '13: Proceedings of the 6th ACM International Conference on Web Search and Data Mining
Co-factorization machines: modeling user interests and predicting individual decisions in Twitter
WSDM '13: Proceedings of the 6th ACM International Conference on Web Search and Data Mining
Sparse hashing for fast multimedia search
ACM Transactions on Information Systems (TOIS)
WWW '13: Proceedings of the 22nd International Conference on World Wide Web
Personal and Ubiquitous Computing
Sparse coding is an unsupervised learning algorithm that finds concise, slightly higher-level representations of its inputs. It has been successfully applied to self-taught learning, in which unlabeled data is used to improve performance on a supervised learning task even when the unlabeled data cannot be associated with that task's labels [Raina et al., 2007]. However, sparse coding assumes a Gaussian noise model and a quadratic loss function, and it therefore performs poorly when applied to binary-valued, integer-valued, or other non-Gaussian data, such as text. Drawing on ideas from generalized linear models (GLMs), we present a generalization of sparse coding to data drawn from any exponential family distribution (such as Bernoulli or Poisson), which we argue is much better suited to modeling non-Gaussian data types. We present an algorithm for solving the L1-regularized optimization problem defined by this model, and show that it is especially efficient when the optimal solution is sparse. Finally, we show that the new model yields significantly improved self-taught learning performance when applied to text classification and to a robotic perception task.
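To make the generalization concrete, here is a minimal sketch of the model in standard GLM notation; the symbols (input x, basis matrix B, sparse activation vector s, sufficient statistic T, log-partition function a, regularization weight \beta) are our own shorthand and may differ from the paper's. Each input is modeled by an exponential family distribution whose natural parameter is a linear combination of basis vectors with sparse weights:

    p(x \mid B, s) = h(x)\, \exp\big( (B s)^\top T(x) - a(B s) \big)

Learning then minimizes the L1-regularized negative log-likelihood over the basis and the per-example activations:

    \min_{B,\, \{s_i\}} \; \sum_i \big[ a(B s_i) - (B s_i)^\top T(x_i) \big] + \beta \sum_i \| s_i \|_1

Taking the Gaussian family recovers ordinary sparse coding's quadratic loss, while Bernoulli or Poisson choices match binary or count-valued inputs such as word counts in text; for a fixed basis B, each s_i subproblem is an L1-regularized GLM fit, which is where sparsity of the optimal solution can be exploited for efficiency.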