Generalization Bounds for K-Dimensional Coding Schemes in Hilbert Spaces
ALT '08: Proceedings of the 19th International Conference on Algorithmic Learning Theory
This paper presents a general coding method in which data in a Hilbert space are represented by finite-dimensional coding vectors. The method is based on empirical risk minimization within a class of linear operators that map the set of coding vectors into the Hilbert space. Two bounds on the expected reconstruction error of the method are derived, highlighting the roles played by the codebook and by the class of linear operators. The results are specialized to several cases of practical importance, including K-means clustering, non-negative matrix factorization, and other sparse coding methods.
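The abstract does not state the formulation explicitly; the following is a minimal sketch of the standard empirical risk minimization setup for such coding schemes, with notation assumed here rather than quoted from the paper. Given a sample x_1, ..., x_n in a Hilbert space H, a fixed set Y of coding vectors in R^K (the codebook), and a class 𝒯 of linear operators T : R^K → H, the method selects

\[
\hat{T} \in \operatorname*{arg\,min}_{T \in \mathcal{T}} \; \frac{1}{n} \sum_{i=1}^{n} \min_{y \in Y} \left\lVert x_i - T y \right\rVert^{2},
\]

and the bounds referred to above control the gap between this empirical reconstruction error and its expectation over the data distribution. Under this reading, the special cases named in the abstract correspond to different choices of Y: taking Y = {e_1, ..., e_K} (the standard basis) makes Ty range over the K columns of T, recovering K-means clustering with the columns as cluster centers; restricting y to non-negative coordinates yields a form of non-negative matrix factorization; and constraining ||y||_1 ≤ λ gives an ℓ1-type sparse coding method.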