In this paper, we propose a general dimensionality reduction method, called Generalized Linear Principal Component Analysis (GLPCA), for data generated from a broad family of distributions and nonlinear functions based on the generalized linear model. Data from different domains often have very different structures and are therefore best modeled by different distributions and reconstruction functions. For example, real-valued data can be modeled by a Gaussian distribution with a linear reconstruction function, whereas binary data may be more appropriately modeled by a Bernoulli distribution with a logit or probit link. Building on generalized linear models, we propose a unified framework for extracting features from data of different domains. We derive a general optimization algorithm, based on natural gradient ascent on the distribution manifold, for obtaining maximum likelihood solutions. We also present specific algorithms derived from this framework for particular data modeling problems, such as document modeling. Experimental results of these algorithms on several data sets validate GLPCA.
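To make the Bernoulli/logit case concrete, the following is a minimal sketch of that instance of the framework (often called logistic PCA): binary data are modeled through a low-rank natural-parameter matrix Theta = U Vᵀ with a sigmoid link, and the Bernoulli log-likelihood is maximized. This sketch uses plain gradient ascent rather than the natural-gradient algorithm described above; all function names, parameters, and step sizes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def logistic_pca(X, k, n_iter=500, lr=0.1, seed=0):
    """Rank-k Bernoulli GLPCA sketch: fit Theta = U @ V.T to binary X
    (n x d, entries in {0, 1}) by gradient ascent on the log-likelihood.
    Plain gradient ascent stands in for the natural-gradient update."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    U = 0.01 * rng.standard_normal((n, k))   # latent scores
    V = 0.01 * rng.standard_normal((d, k))   # loadings
    for _ in range(n_iter):
        Theta = U @ V.T
        P = 1.0 / (1.0 + np.exp(-Theta))     # sigmoid (logit link inverse)
        G = X - P                            # d(log-lik)/d(Theta) for Bernoulli
        U += lr * (G @ V) / n                # chain rule through Theta = U V^T
        V += lr * (G.T @ U) / n              # (1/n scaling is a step-size choice)
    return U, V

def log_likelihood(X, U, V):
    """Bernoulli log-likelihood: sum x*theta - log(1 + exp(theta))."""
    Theta = U @ V.T
    return float(np.sum(X * Theta - np.logaddexp(0.0, Theta)))
```

Replacing the sigmoid and the gradient `X - P` with another exponential-family link and mean function yields the other members of the family (e.g. identity link with Gaussian noise recovers ordinary PCA up to rotation).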