Structured prediction has become increasingly important in recent years. A simple but notable class of structured prediction problems is prediction over sequences, so-called sequential labeling. Sequential labeling often requires a summation over all possible output sequences, for instance when estimating the parameters of a probabilistic model. Computing such a summation directly from its definition is infeasible in practice, since the number of output sequences grows exponentially with the sequence length. The ordinary forward-backward algorithm provides an efficient way to compute it, but it applies only to limited types of summations. In this paper, we propose a generalization of the forward-backward algorithm that can compute a much broader class of summations than the existing forward-backward algorithms. We show that this generalization subsumes several calculations required in past studies, and we also discuss further possibilities of this generalization.
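To make the problem concrete, the sketch below shows the simplest summation of this kind: the partition function of a linear-chain model, i.e. the sum of exponentiated scores over all label sequences. This is an illustration of what the ordinary forward algorithm computes, not the authors' proposed generalization; the score tables `emit` and `trans` are hypothetical toy data. The brute-force version enumerates all K^T sequences, while the forward recursion obtains the same value in O(T·K²) time.

```python
import itertools
import math

def forward_Z(emit, trans):
    """Partition function of a linear-chain model via the forward recursion.
    emit[t][y]   : emission score of label y at position t
    trans[yp][y] : transition score from label yp to label y
    """
    T, K = len(emit), len(emit[0])
    # alpha[y] = sum of exp(score) over all length-(t+1) prefixes ending in y
    alpha = [math.exp(emit[0][y]) for y in range(K)]
    for t in range(1, T):
        alpha = [
            sum(alpha[yp] * math.exp(trans[yp][y] + emit[t][y])
                for yp in range(K))
            for y in range(K)
        ]
    return sum(alpha)

def brute_force_Z(emit, trans):
    """Same summation taken directly from its definition: enumerate
    every one of the K^T label sequences (exponential time)."""
    T, K = len(emit), len(emit[0])
    total = 0.0
    for seq in itertools.product(range(K), repeat=T):
        score = sum(emit[t][seq[t]] for t in range(T))
        score += sum(trans[seq[t - 1]][seq[t]] for t in range(1, T))
        total += math.exp(score)
    return total

# Toy scores: a sequence of length 4 with 3 possible labels.
emit = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1], [-0.5, 0.2, 0.0], [0.1, 0.1, 0.6]]
trans = [[0.2, -0.3, 0.1], [0.0, 0.5, -0.2], [-0.1, 0.3, 0.4]]

print(abs(forward_Z(emit, trans) - brute_force_Z(emit, trans)) < 1e-9)
```

The generalized algorithm the paper proposes replaces this particular sum-of-products with a broader family of summations (for example, expectations of more complex functions of the output sequence) while keeping the same dynamic-programming structure.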