In pattern recognition applications with a large number of input features and too few samples, the curse of dimensionality can be mitigated by extracting features from smaller feature subsets. Domain knowledge, for example, can be used to group related features together into subsets known as "views". The features extracted from each view can then be combined (i.e., stacked) to train a final classifier. In this work, we demonstrate that even very simple features, such as the class distributions within clusters of each view, can serve as valuable stacking features.
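The approach described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the two synthetic views, the cluster count, and all function names are assumptions made for the example. Each view is clustered, each cluster is summarized by the class distribution of the labeled samples it contains, every sample is then represented by its cluster's distribution, and the per-view representations are stacked to train a final classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def cluster_class_distributions(X_view, y, n_clusters, n_classes, seed=0):
    # Cluster one view, then count class labels per cluster and
    # normalize each row into a class-distribution vector.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_view)
    dist = np.zeros((n_clusters, n_classes))
    for cluster, label in zip(km.labels_, y):
        dist[cluster, label] += 1
    dist /= np.maximum(dist.sum(axis=1, keepdims=True), 1)  # avoid 0-division
    return km, dist

def view_features(km, dist, X_view):
    # Represent each sample by the class distribution of its cluster.
    return dist[km.predict(X_view)]

# Toy data with two hypothetical "views" of the same 200 samples.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)
view1 = rng.normal(0, 1, (n, 5)) + y[:, None]
view2 = rng.normal(0, 1, (n, 4)) - y[:, None]

# Extract class-distribution features from each view and stack them.
meta_features = []
for X_view in (view1, view2):
    km, dist = cluster_class_distributions(X_view, y, n_clusters=4, n_classes=2)
    meta_features.append(view_features(km, dist, X_view))

stacked = np.hstack(meta_features)  # shape: (n, n_views * n_classes)
final_clf = LogisticRegression().fit(stacked, y)
```

Note that each sample contributes only `n_classes` numbers per view to the stacked representation, regardless of the view's original dimensionality, which is how the scheme sidesteps the curse of dimensionality.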