The large volume principle proposed by Vladimir Vapnik, which advocates that hypotheses lying in an equivalence class with a larger volume are preferable, is a useful alternative to the large margin principle. In this paper, we introduce a new discriminative clustering model based on the large volume principle, called maximum volume clustering (MVC), and propose two approximation schemes to solve it: a soft-label MVC method using sequential quadratic programming and a hard-label MVC method using semi-definite programming. The proposed MVC is theoretically advantageous for three reasons. Firstly, the optimization involved in hard-label MVC is convex, and under mild conditions, the optimization involved in soft-label MVC is akin to a convex one in terms of the resulting clusters. Secondly, the soft-label MVC method possesses a clustering error bound. Thirdly, MVC includes the optimization problems of a spectral clustering, two relaxed k-means clusterings, and an information-maximization clustering as special limit cases when its regularization parameter approaches infinity. Experiments on several artificial and benchmark data sets demonstrate that the proposed MVC compares favorably with state-of-the-art clustering methods.