Existing multi-view learning (MVL) learns from patterns with multiple information sources and has demonstrated better generalization than conventional single-view learning (SVL). In most real-world cases, however, only single-source patterns are available, to which existing MVL cannot be directly applied. The purpose of this paper is to solve this problem by developing a novel kernel-based MVL technique for single-source patterns. In practice, we first generate different Nyström approximation matrices K̃_p for the Gram matrix G of the given single-source patterns. Then, we regard the learning on each generated Nyström approximation matrix K̃_p as one view. Finally, the different views on the K̃_p are synthesized into a novel multi-view classifier. In doing so, the proposed algorithm, as an MVL machine, can work directly on single-source patterns and simultaneously achieves: (1) low-cost learning; (2) effectiveness; (3) the same Rademacher complexity as the single-view KMHKS; and (4) ease of extension to other kernel-based learning algorithms.
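The view-generation step described above can be illustrated with a minimal sketch: each random landmark subset yields one Nyström approximation K̃_p of the same Gram matrix G, and each K̃_p plays the role of one view. This is an assumption-laden illustration, not the paper's implementation — the RBF kernel, the function names, and the random landmark selection are all choices made here for concreteness.

```python
import numpy as np

def rbf_gram(X, Y, gamma=0.5):
    """RBF kernel matrix between row sets X and Y (illustrative kernel choice)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_view(X, landmark_idx, gamma=0.5):
    """Nystrom approximation K~ = C W^+ C^T of the full Gram matrix,
    built from the kernel columns of a landmark subset."""
    C = rbf_gram(X, X[landmark_idx], gamma)   # n x m block of G
    W = C[landmark_idx]                       # m x m landmark-landmark block
    return C @ np.linalg.pinv(W) @ C.T        # n x n approximate Gram matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))             # single-source patterns
G = rbf_gram(X, X)                            # full Gram matrix

# Each random landmark subset gives one approximation K~_p, i.e. one "view"
# of the same single-source data.
views = [nystrom_view(X, rng.choice(100, size=20, replace=False))
         for _ in range(3)]

for Kp in views:
    err = np.linalg.norm(G - Kp) / np.linalg.norm(G)
    print(f"relative approximation error: {err:.3f}")
```

Because each K̃_p has rank at most m (the landmark count), training one base learner per view costs far less than working with the full n × n Gram matrix, which is consistent with the low-cost learning claim above.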