In many fields, such as bioinformatics or multimedia, data may be described by different sets of features, or views, carrying either global or local information. Some learning tasks exploit these multiple views to improve the overall predictive power of classifiers through fusion-based methods. Usually, such approaches rely on a weighted combination of classifiers (or of selected descriptions), where the classifiers are learned independently. One drawback of these methods is that a classifier learned on one view cannot communicate its failures to the classifiers of the other views. This paper presents a novel approach to integrating multiview information. The proposed algorithm, named Mumbo, is based on boosting. Within the boosting scheme, Mumbo maintains one distribution of examples per view and, at each round, learns one weak classifier per view. Within a view, the distribution of examples evolves both with the ability of the dedicated classifier to handle examples in the corresponding feature space and with the ability of the classifiers of the other views to process the same examples in their own description spaces. The principle is thus to gradually withdraw hard examples from the learning space of one view while raising their weights in the other views; in this way, examples are urged to be processed by the most appropriate views whenever possible. At the end of the iterative learning process, a final classifier is computed as a weighted combination of selected weak classifiers. The paper presents Mumbo in a multiclass, multiview setting, building on recent theoretical advances in boosting. The boosting properties of Mumbo are proved, along with results on its generalization capabilities. Experimental results show that complementary views can indeed cooperate under some assumptions.
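The mechanism described above can be sketched in code. The following is a minimal, hypothetical illustration of the general idea (one weight distribution per view, cross-view reweighting of hard examples, weighted final vote) for binary labels in {-1, +1}, not the authors' actual Mumbo algorithm: the weak learner (a decision stump), the `0.5`/`2.0` reweighting factors, and the choice of keeping only the best view's classifier per round are all simplifying assumptions made here for brevity.

```python
import numpy as np

def stump_fit(X, y, w):
    """Best threshold stump on weighted data; labels y in {-1, +1}."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, weighted error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = np.where(X[:, f] >= t, pol, -pol)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (f, t, pol, err)
    return best

def stump_predict(stump, X):
    f, t, pol, _ = stump
    return np.where(X[:, f] >= t, pol, -pol)

def multiview_boost(views, y, n_rounds=10):
    """Simplified multiview boosting sketch (not the published Mumbo).

    Maintains one example distribution per view. An example that is hard
    for one view but handled by another is down-weighted in the failing
    view and up-weighted elsewhere (hypothetical reweighting rule)."""
    n = len(y)
    dists = [np.full(n, 1.0 / n) for _ in views]
    ensemble = []  # (view index, stump, alpha)
    for _ in range(n_rounds):
        stumps, preds, errs = [], [], []
        for m, X in enumerate(views):
            s = stump_fit(X, y, dists[m])
            stumps.append(s)
            preds.append(stump_predict(s, X))
            errs.append(max(s[3], 1e-10))
        # Keep the weak classifier of the most accurate view this round.
        best = int(np.argmin(errs))
        alpha = 0.5 * np.log((1 - errs[best]) / errs[best])
        ensemble.append((best, stumps[best], alpha))
        # Cross-view reweighting: discount, in view m, examples some other
        # view already handles; emphasize examples no view handles.
        for m in range(len(views)):
            wrong_here = preds[m] != y
            right_elsewhere = np.any(
                [preds[k] == y for k in range(len(views)) if k != m], axis=0)
            dists[m][wrong_here & right_elsewhere] *= 0.5
            dists[m][wrong_here & ~right_elsewhere] *= 2.0
            dists[m] /= dists[m].sum()

    def predict(new_views):
        score = np.zeros(len(new_views[0]))
        for m, s, a in ensemble:
            score += a * stump_predict(s, new_views[m])
        return np.sign(score)
    return predict
```

As a usage sketch, feeding two views where only the first is informative lets the ensemble recover the labels from that view alone, while the cross-view rule keeps reshaping the second view's distribution in the background.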