Dimensionality reduction by feature projection is widely used in pattern recognition, information retrieval, and statistics. When target values are available (e.g., regression values or class labels), it is often beneficial to perform supervised projection, which exploits not only the inputs but also the targets. While this applies to the single-output setting, we are more interested in applications with multiple outputs, where several tasks must be learned simultaneously. In this paper, we introduce a novel projection approach called Multi-Output Regularized feature Projection (MORP), which preserves the information in the input features while capturing the correlations between inputs and outputs and, where applicable, among the multiple outputs themselves. This is achieved by introducing a latent variable model on the joint input-output space and minimizing the reconstruction errors for both inputs and outputs. The mappings are obtained by solving a generalized eigenvalue problem and readily extend to nonlinear mappings. Because the structure of the outputs is exploited, prediction accuracy can be greatly improved by using the new features. We validate our approach in two applications. In the first, we predict users' preferences for a set of paintings. The second concerns image and text categorization, where each image (or document) may belong to multiple categories. The proposed algorithm produces very encouraging results in both settings.
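To make the idea concrete, the following is a minimal sketch of how a joint input-output projection of this kind can be computed. It is not the paper's exact formulation: the trade-off weight `beta`, the function name, and the specific eigen-decomposition below are illustrative assumptions. Minimizing the summed reconstruction errors of inputs X and outputs Y under orthonormal latent factors reduces, in this simplified form, to an eigen-decomposition of a weighted sum of the two Gram matrices.

```python
import numpy as np

def supervised_projection(X, Y, k, beta=0.5):
    """Illustrative sketch (not the paper's exact MORP algorithm).

    X : (n, d) input matrix, Y : (n, m) output matrix.
    Finds k-dimensional latent features that jointly reconstruct
    both X and Y, with beta trading off the two reconstruction
    errors. With orthonormal latent factors this reduces to the
    top-k eigenvectors of a weighted sum of the Gram matrices.
    """
    # joint Gram matrix over the n samples; symmetric and PSD
    K = (1.0 - beta) * (X @ X.T) + beta * (Y @ Y.T)
    vals, vecs = np.linalg.eigh(K)
    order = np.argsort(vals)[::-1][:k]   # indices of top-k eigenvalues
    V = vecs[:, order]                   # (n, k) latent features
    return V

# toy usage on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
Y = rng.standard_normal((20, 3))
V = supervised_projection(X, Y, k=2)
print(V.shape)  # (20, 2)
```

A kernelized variant follows the same pattern: replacing `X @ X.T` and `Y @ Y.T` with kernel Gram matrices yields nonlinear mappings, consistent with the extension mentioned in the abstract.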