Machine Learning - Special issue on inductive transfer
In machine learning problems with tens of thousands of features but only dozens or hundreds of independent training examples, dimensionality reduction is essential for good learning performance. Previous work has typically treated the learning problem in two separate phases: first reduce the dimensionality of the data set with an algorithm such as singular value decomposition, then learn a classifier with an algorithm such as naïve Bayes or support vector machines. We demonstrate that the two goals of dimensionality reduction and classification can be combined into a single learning objective, and we present a novel, efficient algorithm that optimizes this objective directly. Experimental results on fMRI analysis show that our approach achieves better learning performance and lower-dimensional representations than two-phase approaches.
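To make the idea of a single combined objective concrete, here is a minimal sketch (not the paper's exact algorithm): it jointly fits a low-rank factorization X ≈ UV together with a logistic classifier on the low-dimensional codes U, minimizing the sum of a reconstruction term and a classification term by plain gradient descent. The function name, the logistic loss, the optimizer, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def joint_dimred_classify(X, y, k=2, lam=1.0, lr=0.01, iters=300, seed=0):
    """Joint dimensionality reduction + classification sketch.

    Minimizes  ||X - U V||_F^2 + lam * sum_i log(1 + exp(-y_i * (U w)_i))
    over codes U (n x k), basis V (k x d), and classifier weights w (k,),
    with labels y in {-1, +1}. Illustrative only: a simple gradient-descent
    stand-in for a single combined objective, not the paper's algorithm.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.1 * rng.normal(size=(n, k))
    V = 0.1 * rng.normal(size=(k, d))
    w = np.zeros(k)
    for _ in range(iters):
        R = U @ V - X                           # reconstruction residual
        m = np.clip(y * (U @ w), -50, 50)       # margins, clipped for exp safety
        s = -y / (1.0 + np.exp(m))              # d(logistic loss)/d(margin)
        gU = 2 * R @ V.T + lam * np.outer(s, w)  # coupled gradient: both terms
        gV = 2 * U.T @ R
        gw = lam * U.T @ s
        U -= lr * gU
        V -= lr * gV
        w -= lr * gw
    return U, V, w
```

Because U is shaped by both terms of the objective, the learned low-dimensional representation is steered toward directions that are useful for classification, which is exactly what a two-phase SVD-then-classify pipeline cannot guarantee.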