Existing multi-view learning (MVL) addresses patterns with multiple information sources and has been shown to generalize better than conventional single-view learning (SVL). In most real-world cases, however, only single-source patterns are available, and existing MVL cannot be applied to them directly. This paper develops a new MVL technique for single-source patterns. To this end, we first reshape the original vector representation of each single-source pattern into multiple matrix representations. Doing so transforms the architecture of a given base classifier into different sub-classifiers, each of which classifies the patterns in one matrix representation and is regarded as one view of the original base classifier. The result is a set of sub-classifiers with different views. We then develop a joint, rather than separate, learning process for these multi-view sub-classifiers. In practice, the base classifier employed is the vector-pattern-oriented Ho-Kashyap classifier with regularization learning (MHKS), although the framework is not limited to MHKS; the proposed joint multi-view learning method is therefore named MultiV-MHKS. Experimental results on benchmark data sets demonstrate the feasibility and effectiveness of MultiV-MHKS. More importantly, a Rademacher complexity analysis shows that the proposed multi-view approach generally enjoys a tighter generalization risk bound than its single-view counterpart.
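The view-generation step described above can be illustrated with a minimal sketch: a single pattern vector is reshaped into several matrix representations, one per view, each of which would feed its own sub-classifier. This is only an illustration of the reshaping idea, not the paper's implementation; the function name `matrix_views` and the row-major reshaping convention are assumptions.

```python
import numpy as np

def matrix_views(x, shapes):
    """Reshape a 1-D pattern vector into several matrix 'views'.

    Each shape (r, c) with r * c == len(x) yields one matrix
    representation of the same pattern; in the MultiV-MHKS setting,
    each such view would be handled by its own sub-classifier.
    """
    d = x.size
    views = []
    for r, c in shapes:
        if r * c != d:
            raise ValueError(f"shape ({r}, {c}) incompatible with d={d}")
        views.append(x.reshape(r, c))  # row-major reshape (an assumption)
    return views

# Example: a 12-dimensional pattern reshaped into three views,
# including the trivial 1x12 view equivalent to the original vector.
x = np.arange(12.0)
views = matrix_views(x, [(1, 12), (3, 4), (2, 6)])
```

All views carry exactly the same entries; only the matrix structure differs, which is what lets each sub-classifier impose a different structural bias on the same data.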