This paper presents a new Bayesian sparse learning approach that selects salient lexical features for sparse topic modeling. Bayesian learning based on latent Dirichlet allocation (LDA) is performed by incorporating spike-and-slab priors. In this sparse LDA (sLDA), the spike distribution selects salient words, while the slab distribution establishes the latent topic model over the selected relevant words. A variational inference procedure is developed to estimate the prior parameters of sLDA. In document modeling experiments comparing LDA and sLDA, we find that the proposed sLDA not only reduces model perplexity but also lowers memory and computation costs. The Bayesian feature selection method effectively identifies relevant topic words for building a sparse topic model.
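The spike-and-slab construction described above can be sketched as a toy generative process: a Bernoulli indicator per vocabulary word (the spike at zero prunes it, the slab keeps it), with topic-word distributions defined only over the kept words. This is a minimal illustration with made-up dimensions and a plain sampling loop, not the paper's model or its variational inference procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, not from the paper):
# vocab size, topics, documents, words per document
V, K, D, N = 20, 3, 5, 50

# Spike-and-slab selection: indicator b[v] = 1 keeps word v (slab),
# b[v] = 0 prunes it (spike at zero); pi is the prior inclusion probability.
pi = 0.5
b = rng.random(V) < pi                 # Bernoulli(pi) indicators
selected = np.flatnonzero(b)           # the salient (sparse) vocabulary

# Slab part: topic-word distributions defined only over selected words.
beta = rng.dirichlet(np.ones(len(selected)), size=K)   # shape (K, |selected|)

# Standard LDA generative process restricted to the sparse vocabulary.
alpha = np.ones(K)
docs = []
for _ in range(D):
    theta = rng.dirichlet(alpha)                # document-topic proportions
    z = rng.choice(K, size=N, p=theta)          # topic assignment per word
    words = [selected[rng.choice(len(selected), p=beta[k])] for k in z]
    docs.append(words)

# Every generated word comes from the selected (slab) vocabulary.
assert all(b[w] for doc in docs for w in doc)
```

Because topic-word distributions live only on the selected vocabulary, both the parameter count and the per-word sampling cost shrink with the sparsity level, which mirrors the memory and computation savings the abstract reports.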