Existing Multi-View Learning (MVL) addresses how to learn from patterns with multiple information sources and has been shown to generalize better than the usual Single-View Learning (SVL). In most real-world cases, however, only single-source patterns are available, so existing MVL methods cannot be applied. The purpose of this paper is to develop a new multi-view regularization learning method for single-source patterns. Concretely, given single-source patterns, we first map them into M feature spaces using M different empirical kernel mappings, then associate each generated feature space with our previously proposed Discriminative Regularization (DR), and finally synthesize the M DRs into a single learning process, yielding a new Multi-view Discriminative Regularization (MVDR) in which each DR can be regarded as one view. The proposed method achieves: (1) complementarity among the multiple views generated from single-source patterns; (2) an analytic solution for classification; (3) a direct optimization formulation for multi-class problems that requires neither one-against-all nor one-against-one strategies.
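The first step above, mapping single-source patterns into an explicit feature space via an empirical kernel, can be sketched as below. This is an illustrative construction based on the standard eigendecomposition form of the empirical kernel map (the function name and parameters are ours, and the paper's exact formulation may differ): each view would use a different kernel (e.g. Gaussian kernels with different widths), giving M such mappings.

```python
import numpy as np

def empirical_kernel_map(K, eps=1e-10):
    """Map n training points into an explicit empirical feature space,
    given only their kernel (Gram) matrix K (n x n, symmetric PSD).

    With K = Q diag(lam) Q^T, the i-th point is mapped to
    phi(x_i) = diag(lam)^{-1/2} Q^T k_i, where k_i is the i-th column
    of K.  The returned matrix Phi (one row per point) then satisfies
    Phi @ Phi.T ~= K, i.e. inner products in the empirical feature
    space reproduce the kernel values.
    """
    lam, Q = np.linalg.eigh(K)
    keep = lam > eps          # discard numerically zero/negative directions
    lam, Q = lam[keep], Q[:, keep]
    Phi = (K @ Q) / np.sqrt(lam)   # = Q diag(lam)^{1/2}
    return Phi

# Hypothetical usage: one view from a Gaussian kernel on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq)                    # Gaussian (RBF) Gram matrix
Phi = empirical_kernel_map(K)
```

Once each of the M kernels has been turned into such an explicit mapping, an ordinary linear regularized classifier (here, DR) can be trained in each feature space, which is what makes the analytic solution of point (2) possible.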