A Combined Latent Class and Trait Model for the Analysis and Visualization of Discrete Data
IEEE Transactions on Pattern Analysis and Machine Intelligence
Parallel Fiber Coding in the Cerebellum for Life-Long Learning
Autonomous Robots
Feature extraction by non-parametric mutual information maximization
The Journal of Machine Learning Research
Orientation distance-based discriminative feature extraction for multi-class classification
CIKM '10 Proceedings of the 19th ACM international conference on Information and knowledge management
A minimax probabilistic approach to feature transformation for multi-class data
Applied Soft Computing
This paper presents the derivation of an unsupervised learning algorithm that enables the identification and visualization of latent structure within ensembles of high-dimensional data. The method provides a linear projection of the data onto a lower-dimensional subspace, identifying the characteristic structure of the observations' independent latent causes. Experimental results confirm the attractiveness of this technique for unsupervised exploratory data analysis and data visualization, and an empirical comparison is made with the recently proposed generative topographic mapping (GTM) and standard principal component analysis (PCA). Based on standard probability density models, a generic nonlinearity is developed which allows both (1) identification and visualization of dichotomised clusters inherent in the observed data and (2) separation of sources with arbitrary distributions from mixtures whose dimensionality may be greater than the number of sources. The resulting algorithm is therefore also a generalized neural approach to independent component analysis (ICA), and it is considered a promising method for the analysis of real-world data consisting of sub- and super-Gaussian components, such as biomedical signals.
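The abstract compares the proposed model against standard PCA as a visualization baseline. As a point of reference only (this is not the paper's algorithm), a minimal NumPy sketch of that baseline, projecting hypothetical clustered high-dimensional data onto a 2-D subspace via SVD-based PCA, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two latent "causes", each producing a cluster in 10-D.
n, d = 200, 10
centers = rng.normal(size=(2, d)) * 3.0
labels = rng.integers(0, 2, size=n)
X = centers[labels] + rng.normal(size=(n, d))

# PCA: centre the data, then take the top two right singular vectors
# as the axes of the visualisation subspace.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # linear projection to 2-D coordinates

print(Z.shape)  # one 2-D point per observation, suitable for a scatter plot
```

Unlike the latent class/trait model described above, this projection maximizes retained variance rather than modelling discrete latent causes, which is precisely the limitation the paper's comparison targets.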