The nature of statistical learning theory.
Combining labeled and unlabeled data with co-training. COLT '98: Proceedings of the Eleventh Annual Conference on Computational Learning Theory.
An empirical comparison of four initialization methods for the K-Means algorithm. Pattern Recognition Letters.
Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Refining Initial Points for K-Means Clustering. ICML '98: Proceedings of the Fifteenth International Conference on Machine Learning.
Cluster ensembles --- a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research.
ICDM '04: Proceedings of the Fourth IEEE International Conference on Data Mining.
Clustering Ensembles: Models of Consensus and Weak Partitions. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Estimation of Dependences Based on Empirical Data: Empirical Inference Science (Information Science and Statistics).
Artificial Neural Networks: An Introduction (SPIE Tutorial Texts in Optical Engineering, Vol. TT68).
ACM Transactions on Knowledge Discovery from Data (TKDD).
A multiview approach for intelligent data analysis based on data operators. Information Sciences: An International Journal.
Collaborative clustering with background knowledge. Data & Knowledge Engineering.
SVM+ regression and multi-task learning. IJCNN '09: Proceedings of the 2009 International Joint Conference on Neural Networks.
Guest Editorial: Learning from multiple sources. Machine Learning.
Multi-view kernel construction. Machine Learning.
A theory of learning from different domains. Machine Learning.
Clustering of the self-organizing map. IEEE Transactions on Neural Networks.
Incremental hierarchical text clustering with privileged information. Proceedings of the 2013 ACM Symposium on Document Engineering.
Information Sciences: An International Journal.
Many machine learning algorithms assume that all input samples are drawn independently and identically from some common distribution, either on the input space X, in the case of unsupervised learning, or on the input-output space X × Y, in the case of supervised and semi-supervised learning. In recent years the relaxation of this assumption has been explored, and the importance of incorporating additional information into machine learning algorithms has become more apparent. Traditionally, such fusion of information was the domain of semi-supervised learning. More recently, the inclusion of knowledge from separate hypothetical spaces has been proposed by Vapnik as part of the supervised setting. In this work we explore Vapnik's idea of 'master-class' learning and the associated learning using 'privileged' information, but within the unsupervised setting. Adopting this advanced supervised learning paradigm for the unsupervised setting prompts an investigation of the difference between privileged and technical data. Our proposed aRi-MAX method improves the stability of the K-Means algorithm and identifies the best clustering solution on an artificial dataset. We then propose an information-theoretic, dot-product-based algorithm called P-Dot. This method can employ a wide variety of clustering techniques, individually or in combination, while fusing privileged and technical data for improved clustering. Applying the P-Dot method to the task of digit recognition confirms our findings in a real-world scenario.
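The abstract does not detail aRi-MAX, but its name suggests selecting among K-Means restarts by maximizing the adjusted Rand index (ARI). The sketch below illustrates that reading only: run K-Means several times and keep the partition with the highest mean ARI agreement with the other runs. The function and parameter names (ari_max_kmeans, n_restarts) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of an ARI-based stability selection for K-Means,
# assuming aRi-MAX picks the restart that agrees most with the others.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

def ari_max_kmeans(X, n_clusters, n_restarts=20, seed=0):
    """Among n_restarts K-Means runs, return the labelling whose mean
    adjusted Rand index against all other runs is highest."""
    rng = np.random.RandomState(seed)
    labelings = [
        KMeans(n_clusters=n_clusters, n_init=1,
               random_state=rng.randint(2**31 - 1)).fit_predict(X)
        for _ in range(n_restarts)
    ]
    # Mean pairwise agreement of each run with all the other runs.
    mean_ari = [
        np.mean([adjusted_rand_score(a, b) for b in labelings if b is not a])
        for a in labelings
    ]
    return labelings[int(np.argmax(mean_ari))]
```

Usage is a drop-in replacement for a single K-Means call, e.g. `labels = ari_max_kmeans(X, n_clusters=10)`; the most mutually consistent partition is returned instead of an arbitrary restart.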
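P-Dot is likewise described only at a high level: information-theoretic, dot-product based, and able to fuse privileged with technical data across different clustering techniques. As a hedged illustration of that flavour of fusion, and emphatically not the authors' P-Dot algorithm, the sketch below blends dot-product style co-association matrices built from each view and extracts a consensus partition; the names fuse_and_cluster, X_tech, X_priv and the blending weight alpha are all assumptions.

```python
# A hedged sketch of two-view consensus clustering, assuming a
# co-association (dot product of one-hot label vectors) fusion of a
# technical view and a privileged view.
import numpy as np
from sklearn.cluster import KMeans, SpectralClustering

def coassociation(X, n_clusters, n_runs=10, seed=0):
    """Fraction of runs in which each pair of samples lands in the same
    K-Means cluster (the dot product of their one-hot label vectors)."""
    n = len(X)
    C = np.zeros((n, n))
    for r in range(n_runs):
        labels = KMeans(n_clusters=n_clusters, n_init=1,
                        random_state=seed + r).fit_predict(X)
        C += (labels[:, None] == labels[None, :])
    return C / n_runs

def fuse_and_cluster(X_tech, X_priv, n_clusters, alpha=0.5):
    """Blend co-association matrices from the technical and privileged
    views and cut the blended similarities into a consensus partition."""
    C = alpha * coassociation(X_tech, n_clusters) \
        + (1.0 - alpha) * coassociation(X_priv, n_clusters)
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(C)
```

Any base clusterer could stand in for K-Means inside coassociation, which mirrors the abstract's claim that a wide variety of clustering techniques can be used individually or in combination.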