Many settings of unsupervised learning can be viewed as quantization problems: the minimization of the expected quantization error subject to some restrictions. This view allows tools from the theory of (supervised) risk minimization, such as regularization, to be applied to unsupervised learning. The setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: (1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways; and (2) we derive uniform convergence bounds, and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers, which allow us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach.
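To make the quantization view concrete, the following is a minimal sketch (an illustration only, not the paper's regularized algorithm): it minimizes the empirical quantization error over a finite codebook via Lloyd's algorithm. The principal-manifold setting replaces the discrete codebook with a smooth, regularized mapping, but the objective being minimized is the same expected quantization error.

```python
import numpy as np

def quantization_error(X, codebook):
    """Mean squared distance from each point to its nearest codebook vector."""
    d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).mean()

def lloyd(X, k, n_iter=50, seed=0):
    """Lloyd's algorithm for vector quantization (unregularized k-means).

    Alternates between assigning points to their nearest center and
    moving each center to the mean of its assigned points; each step
    does not increase the empirical quantization error.
    """
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            pts = X[assign == j]
            if len(pts):
                codebook[j] = pts.mean(axis=0)
    return codebook
```

In the regularized variant, the codebook vectors are constrained to lie on (or near) a smooth low-dimensional manifold, which is where a regularization operator on the mapping enters.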