K-hyperline clustering is an iterative algorithm, based on the singular value decomposition, that has been successfully used in sparse component analysis. In this paper, we prove that the algorithm converges to a locally optimal solution for a given set of training data, based on Lloyd's optimality conditions. Local optimality is further established by developing an Expectation-Maximization procedure for learning dictionaries to be used in sparse representations and deriving the clustering algorithm as its special case. The cluster centroids obtained from the algorithm are proved to tessellate the space into convex Voronoi regions. The stability of clustering is shown by posing the problem as an empirical risk minimization procedure over a function class. It is proved that, under certain conditions, the cluster centroids learned from two sets of i.i.d. training samples drawn from the same probability space become arbitrarily close to each other as the number of training samples increases asymptotically.
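To make the alternating structure of the algorithm concrete, here is a minimal sketch of K-hyperline clustering in Python. It assumes centroid lines pass through the origin, so each sample is assigned to the unit direction maximizing its absolute projection (equivalently, minimizing its residual), and each direction is updated as the top right singular vector of its assigned samples. The function name, initialization, and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def k_hyperline(X, K, n_iter=50, seed=0):
    """Cluster rows of X (n_samples x dim) around K one-dimensional
    subspaces (lines through the origin). Returns unit direction
    vectors W (K x dim) and per-sample cluster labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Illustrative initialization: normalize K randomly chosen samples.
    idx = rng.choice(n, size=K, replace=False)
    W = X[idx] / np.linalg.norm(X[idx], axis=1, keepdims=True)
    for _ in range(n_iter):
        # Assignment step: choose the line with the largest absolute
        # projection |w_k^T x|, which minimizes ||x - (w_k^T x) w_k||.
        proj = np.abs(X @ W.T)          # shape (n, K)
        labels = np.argmax(proj, axis=1)
        # Update step: the new direction for cluster k is the principal
        # (top right) singular vector of the samples assigned to it.
        for k in range(K):
            Xk = X[labels == k]
            if len(Xk) == 0:
                continue  # keep the previous direction for empty clusters
            _, _, Vt = np.linalg.svd(Xk, full_matrices=False)
            W[k] = Vt[0]
    return W, labels
```

Because each of the two steps can only decrease the total residual error, the iteration mirrors Lloyd-type descent; the paper's convergence result formalizes this for the hyperline (SVD-based) centroid update.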