Latent variable models and factor analysis
GTM: the generative topographic mapping
Neural Computation
Nonlinear component analysis as a kernel eigenvalue problem
Neural Computation
Neural Networks for Pattern Recognition
Variational Relevance Vector Machines
UAI '00: Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence
Sparse Bayesian learning and the relevance vector machine
The Journal of Machine Learning Research
Local distance preservation in the GP-LVM through back constraints
ICML '06: Proceedings of the 23rd International Conference on Machine Learning
Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning)
Pattern Recognition and Machine Learning (Information Science and Statistics)
Probabilistic Non-linear Principal Component Analysis with Gaussian Process Latent Variable Models
The Journal of Machine Learning Research
A Unifying View of Sparse Approximate Gaussian Process Regression
The Journal of Machine Learning Research
A Direct Method for Building Sparse Kernel Learning Algorithms
The Journal of Machine Learning Research
Twin Kernel Embedding with Back Constraints
ICDMW '07: Proceedings of the Seventh IEEE International Conference on Data Mining Workshops
IEEE Transactions on Pattern Analysis and Machine Intelligence
Construction of tunable radial basis function networks using orthogonal forward selection
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
A new dimensionality reduction method, called the relevance units latent variable model (RULVM), is proposed in this paper. RULVM is closely linked to the framework of the Gaussian process latent variable model (GPLVM) and originates from a recently developed sparse kernel model, the relevance units machine (RUM). RUM follows the idea of the relevance vector machine (RVM) under the Bayesian framework but relaxes the constraint that relevance vectors (RVs) must be selected from the input vectors; instead, it treats relevance units (RUs) as parameters to be learned from the data. As a result, RUM retains all the advantages of RVM while offering superior sparsity. RULVM inherits this sparseness from RUM, and experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.
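The key idea separating RUM from RVM, namely that the kernel centres (relevance units) are free parameters optimized together with the weights rather than a subset of the training inputs, can be illustrated with a small sketch. The Python example below is a hypothetical, simplified illustration of that idea using plain gradient descent on a squared-error loss; it is not the paper's Bayesian RUM or RULVM algorithm, and all names (rbf, U, w, the learning rate) are illustrative assumptions.

import numpy as np

def rbf(X, U, gamma=1.0):
    # Kernel matrix K[i, j] = exp(-gamma * ||x_i - u_j||^2)
    d2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))           # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

# RUM-style model: m learnable relevance units with m << n.
# In an RVM the centres would instead be selected rows of X.
m = 5
U = rng.uniform(-3.0, 3.0, size=(m, 1))             # relevance units (learned)
w = np.zeros(m)                                     # kernel weights

lr = 0.05
for step in range(2000):
    K = rbf(X, U)                                   # (n, m) design matrix
    r = K @ w - y                                   # residuals
    grad_w = K.T @ r / len(y)
    # d K[i, j] / d u_j = 2 * gamma * K[i, j] * (x_i - u_j), with gamma = 1 here
    diff = X[:, None, :] - U[None, :, :]            # (n, m, d)
    dK_dU = 2.0 * K[:, :, None] * diff
    grad_U = (r[:, None, None] * w[None, :, None] * dK_dU).sum(axis=0) / len(y)
    w -= lr * grad_w
    U -= lr * grad_U                                # units move freely in input space

print("learned relevance units:", np.sort(U.ravel()))

Because the units in U are unconstrained, a handful of them can cover the same input range that an RVM would tile with selected training points; letting the centres move off the data grid is, under the simplifying assumptions of this sketch, the source of the extra sparsity the abstract refers to.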