Although many digitized documents are available on the Internet, almost all of them are unlabeled. A classifier therefore has to be constructed from such numerous unlabeled data. In the pattern recognition research field, many researchers have turned to deep architecture neural networks to achieve this aim. A deep architecture neural network is a semi-supervised learning approach that achieves high performance on object recognition tasks. The network is trained on many unlabeled data and transforms raw input features into new features that represent higher-level concepts, for example a human face. In this study I focus on the feature generation ability of a deep architecture neural network and apply it to natural language processing. Concretely, word clustering is developed for sentiment analysis. Experimental results show that clustering performance is good even though the learning approach is unsupervised.
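The pipeline described above (learn features from unlabeled data, then cluster words in the learned feature space) can be sketched as follows. This is a minimal illustration, not the paper's actual method: it uses a toy random dataset, a single-layer autoencoder in place of a full deep architecture, and plain k-means for the clustering step; all variable names and hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "word" feature vectors: two groups with different dominant
# dimensions (assumed stand-in for raw context-count features).
X = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 10)) + np.eye(10)[0],
    rng.normal(0.0, 0.1, size=(20, 10)) + np.eye(10)[5],
])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Single-layer autoencoder trained by gradient descent on squared
# reconstruction error -- a stand-in for one layer of a deep network.
n_in, n_hid = X.shape[1], 4
W1 = rng.normal(0, 0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)
lr = 0.5
for _ in range(500):
    H = sigmoid(X @ W1 + b1)      # hidden (learned) features
    Xhat = H @ W2 + b2            # linear reconstruction
    err = Xhat - X
    # Backpropagation of the mean squared reconstruction error
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# New feature representation produced by the trained encoder.
features = sigmoid(X @ W1 + b1)

# Word clustering on the learned features via simple k-means.
def kmeans(F, k, iters=50):
    centers = F[rng.choice(len(F), size=k, replace=False)]
    for _ in range(iters):
        dists = ((F[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = F[labels == j].mean(axis=0)
    return labels

labels = kmeans(features, k=2)
```

In practice the raw features would come from a real corpus and the encoder would be stacked into several layers, but the two stages (unsupervised feature learning, then clustering in the learned space) mirror the approach the abstract describes.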