We propose a clustering algorithm that effectively utilizes feature order preferences, which take the form "feature s is more important than feature t". Our formulation incorporates feature order preferences into prototype-based clustering. The derived algorithm automatically learns distortion measures, parameterized by feature weights, that respect the feature order preferences as much as possible. Our method allows a broad range of distortion measures, such as Bregman divergences. Moreover, even when generalized entropy is used in the regularization term, the subproblem of learning the feature weights remains a convex programming problem. Empirical results demonstrate the effectiveness and potential of our method.