Probabilistic latent semantic indexing. Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
SIMPLIcity: Semantics-Sensitive Integrated Matching for Picture LIbraries. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Automatic Linguistic Indexing of Pictures by a Statistical Modeling Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Probabilistic author-topic models for information discovery. Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Modeling word burstiness using the Dirichlet distribution. ICML '05: Proceedings of the 22nd International Conference on Machine Learning.
Employing Latent Dirichlet Allocation for fraud detection in telecommunications. Pattern Recognition Letters.
ICDMW '07: Proceedings of the Seventh IEEE International Conference on Data Mining Workshops.
Fast collapsed Gibbs sampling for latent Dirichlet allocation. Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Bayesian Multi-topic Microarray Analysis with Hyperparameter Reestimation. ADMA '09: Proceedings of the 5th International Conference on Advanced Data Mining and Applications.
MPI/OpenMP hybrid parallel inference for Latent Dirichlet Allocation. Proceedings of the Third Workshop on Large Scale Data Mining: Theory and Applications.
Accelerating text mining workloads in a MapReduce-based distributed GPU environment. Journal of Parallel and Distributed Computing.
In this paper, we propose an acceleration of collapsed variational Bayesian (CVB) inference for latent Dirichlet allocation (LDA) using Nvidia CUDA-compatible devices. While LDA is an effective Bayesian multi-topic document model, its parameter estimation requires more complicated computations than simpler document models such as probabilistic latent semantic indexing. We therefore accelerate CVB inference, an efficient deterministic inference method for LDA, with Nvidia CUDA. In our evaluation experiments, we used a set of 50,000 documents and a set of 10,000 images, and obtained inference results comparable to those of sequential CVB inference.
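The abstract gives no code, but the kind of per-token update that CVB inference performs (and that a GPU can parallelize) can be sketched with the simplified zero-order variant known as CVB0. The sketch below is illustrative only: the function name `cvb0_lda`, the hyperparameter defaults, and the data layout are assumptions, not details from the paper, and the paper's actual method is full CVB, not CVB0.

```python
import numpy as np

def cvb0_lda(docs, V, K, alpha=0.1, beta=0.01, iters=50, seed=0):
    """Illustrative zero-order collapsed variational Bayes (CVB0) for LDA.

    docs: list of documents, each a list of word ids in 0..V-1.
    Returns per-token topic responsibilities and expected count matrices.
    """
    rng = np.random.default_rng(seed)
    # gamma[d][n] is a length-K topic distribution for token n of document d
    gamma = [rng.dirichlet(np.ones(K), size=len(doc)) for doc in docs]
    # expected counts playing the role of the collapsed Gibbs statistics
    n_dk = np.array([g.sum(axis=0) for g in gamma])   # document-topic counts
    n_kw = np.zeros((K, V))                           # topic-word counts
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            n_kw[:, w] += gamma[d][n]
    n_k = n_kw.sum(axis=1)                            # topic totals

    for _ in range(iters):
        for d, doc in enumerate(docs):
            for n, w in enumerate(doc):
                g = gamma[d][n]
                # remove this token's current contribution
                n_dk[d] -= g; n_kw[:, w] -= g; n_k -= g
                # CVB0 update: expected collapsed Gibbs conditional, renormalized
                g = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
                g /= g.sum()
                gamma[d][n] = g
                # add the refreshed contribution back
                n_dk[d] += g; n_kw[:, w] += g; n_k += g
    return gamma, n_dk, n_kw
```

Because each update touches only one token's responsibilities plus shared count matrices, batches of tokens can be updated concurrently on a CUDA device, which is the kind of parallelism the paper exploits.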