Multi-view learning is designed to process data drawn from multiple information sources. Our previous work extended multi-view learning and proposed an effective learning machine named MultiV-MHKS. MultiV-MHKS first decomposes a base classifier into M different sub-classifiers and then carries out a joint learning process over the M generated sub-classifiers. Each sub-classifier is taken as one view of MultiV-MHKS. However, MultiV-MHKS assumed that every sub-classifier should play an equal role in the ensemble, so the weights r_q, q = 1, ..., M of the sub-classifiers were all set to the same value. In practice, this assumption is neither flexible nor appropriate, since the r_q should reflect the different contributions of their corresponding views. To make the r_q flexible and appropriate, in this paper we propose a regularized multi-view learning machine named RMultiV-MHKS with optimized r_q. The r_q are optimized with the Response Surface Technique (RST) on cross-validation data, which yields a regularized multi-view learning machine. This optimization can assign a given view zero weight in the combination, meaning that the view carries no discriminative information for the problem at hand and can therefore be pruned. The experimental results validate the effectiveness of the proposed RMultiV-MHKS and explore the effect of several important parameters. The characteristics of RMultiV-MHKS are: (1) it distributes more weight to the favorable views, which reflects the structure of the problem; (2) it has a tighter generalization risk bound than its corresponding single-view learning machine in terms of the Rademacher complexity; (3) it achieves statistically superior classification performance to the original MultiV-MHKS.
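The weight-optimization idea in the abstract can be illustrated with a minimal sketch of response-surface methodology. This is not the authors' implementation: `cv_accuracy` below is a hypothetical, smooth stand-in for the real cross-validation accuracy of the M-view ensemble, and the simplex grid search stands in for whatever optimizer is used on the fitted surface. The sketch samples candidate weight vectors r_q, fits a second-order polynomial (the response surface) to the measured response by least squares, maximizes the fitted surface, and prunes views whose optimized weight is zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_accuracy(r):
    # Hypothetical stand-in for the cross-validation accuracy of the
    # ensemble at view weights r (NOT the paper's actual objective):
    # a smooth response peaking at weights (0.7, 0.3, 0.0).
    target = np.array([0.7, 0.3, 0.0])
    return 1.0 - float(np.sum((r - target) ** 2))

def quad_features(R):
    # Full second-order design matrix for the response surface:
    # intercept, linear, and pairwise-quadratic terms in the weights.
    n, M = R.shape
    cols = [np.ones(n)]
    cols += [R[:, q] for q in range(M)]
    cols += [R[:, q] * R[:, p] for q in range(M) for p in range(q, M)]
    return np.column_stack(cols)

# 1) Sample candidate weight vectors on the simplex and measure the response.
R = rng.dirichlet(np.ones(3), size=60)
y = np.array([cv_accuracy(r) for r in R])

# 2) Fit the response surface (second-order polynomial) by least squares.
beta, *_ = np.linalg.lstsq(quad_features(R), y, rcond=None)

# 3) Maximize the fitted surface over a dense grid on the weight simplex.
grid = np.array([(i, j, 20 - i - j)
                 for i in range(21) for j in range(21 - i)]) / 20.0
r_best = grid[np.argmax(quad_features(grid) @ beta)]

# 4) Views whose optimized weight is (near) zero carry no discriminative
#    information under this surrogate and are pruned from the combination.
pruned = [q for q, w in enumerate(r_best) if w < 1e-6]
```

Because the surrogate response here is itself quadratic, the fitted surface recovers it on the simplex and the grid search lands on the peak, driving the third view's weight to zero; with a real cross-validation response the surface is only a local approximation and the sampling/refit cycle would typically be iterated.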