We introduce an expectation-maximization (EM) type algorithm for maximum likelihood optimization of conditional densities. It is applicable to hidden-variable models whose distributions are from the exponential family. The algorithm can alternatively be viewed as automatic step size selection for gradient ascent, where extra computation is traded for a guarantee that each step increases the likelihood. This tradeoff makes the algorithm computationally more feasible than the earlier conditional EM. The method gives a theoretical basis for the extended Baum-Welch algorithms used in discriminative hidden Markov models in speech recognition, and it compares favourably with the current best method in our experiments.
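To illustrate the monotone-increase idea described above, the sketch below maximizes a conditional log-likelihood by gradient ascent with a step size chosen so that no accepted step decreases the objective. This is a hypothetical minimal example, not the paper's bound-maximization algorithm: it uses plain logistic regression as a simple exponential-family conditional model and a backtracking line search as a stand-in for the automatic step size selection.

```python
# Minimal sketch (assumptions: logistic model, backtracking line search):
# gradient ascent on a conditional log-likelihood where each accepted step
# is guaranteed not to decrease the objective.
import numpy as np

def conditional_log_likelihood(w, X, y):
    """Sum of log p(y | x; w) for a logistic model, computed stably."""
    z = X @ w
    return np.sum(y * z - np.logaddexp(0.0, z))

def gradient(w, X, y):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (y - p)

def ascend(X, y, steps=100, step0=1.0, shrink=0.5, tol=1e-8):
    w = np.zeros(X.shape[1])
    ll = conditional_log_likelihood(w, X, y)
    for _ in range(steps):
        g = gradient(w, X, y)
        step = step0
        # Shrink the step until the conditional likelihood does not decrease.
        while True:
            w_new = w + step * g
            ll_new = conditional_log_likelihood(w_new, X, y)
            if ll_new >= ll or step < tol:
                break
            step *= shrink
        if ll_new <= ll + tol:
            break  # no further improvement
        w, ll = w_new, ll_new
    return w, ll

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200) > 0).astype(float)
w_hat, ll_hat = ascend(X, y)
print(w_hat, ll_hat)
```

The backtracking loop plays the role that the paper assigns to its bound: it spends extra likelihood evaluations to guarantee monotone progress, instead of relying on a hand-tuned gradient ascent step size.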