Learning with lookahead: can history-based models rival globally optimized models?

  • Authors:
  • Yoshimasa Tsuruoka, Yusuke Miyao, Jun'ichi Kazama

  • Affiliations:
  • Yoshimasa Tsuruoka: Japan Advanced Institute of Science and Technology (JAIST), Japan; National Institute of Information and Communications Technology (NICT), Japan
  • Yusuke Miyao: National Institute of Informatics (NII), Japan; National Institute of Information and Communications Technology (NICT), Japan
  • Jun'ichi Kazama: National Institute of Information and Communications Technology (NICT), Japan

  • Venue:
  • CoNLL '11: Proceedings of the Fifteenth Conference on Computational Natural Language Learning
  • Year:
  • 2011

Abstract

This paper shows that the performance of history-based models can be significantly improved by performing lookahead in the state space when making each classification decision. Instead of simply using the best action output by the classifier, we determine the best action by looking into possible sequences of future actions and evaluating the final states realized by those action sequences. We present a perceptron-based parameter optimization method for this learning framework and show its convergence properties. The proposed framework is evaluated on part-of-speech tagging, chunking, named entity recognition and dependency parsing, using standard data sets and features. Experimental results demonstrate that history-based models with lookahead are competitive with globally optimized models, including conditional random fields (CRFs) and structured perceptrons.
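As a rough illustration of the decoding idea, the sketch below implements depth-limited lookahead for a history-based model. The interfaces `actions`, `apply_fn`, and `score` are hypothetical stand-ins, not the paper's API: `actions(state)` enumerates the legal actions in a state, `apply_fn(state, a)` returns the successor state, and `score(state, a)` is the (e.g., perceptron) classifier's score for taking action `a`. Each candidate action is evaluated by the best-scoring sequence of future actions it can lead to, rather than by its one-step score alone.

```python
from typing import Any, Callable, Iterable

def lookahead_score(state: Any,
                    depth: int,
                    actions: Callable[[Any], Iterable[Any]],
                    apply_fn: Callable[[Any, Any], Any],
                    score: Callable[[Any, Any], float]) -> float:
    """Best cumulative classifier score reachable within `depth` more actions."""
    if depth == 0:
        return 0.0
    best = None
    for a in actions(state):
        s = score(state, a) + lookahead_score(apply_fn(state, a),
                                              depth - 1, actions, apply_fn, score)
        if best is None or s > best:
            best = s
    # A state with no legal actions is terminal; it contributes no further score.
    return 0.0 if best is None else best

def choose_action(state: Any,
                  depth: int,
                  actions: Callable[[Any], Iterable[Any]],
                  apply_fn: Callable[[Any, Any], Any],
                  score: Callable[[Any, Any], float]) -> Any:
    """Pick the action whose best `depth`-step continuation scores highest,
    instead of greedily taking the classifier's one-step argmax."""
    return max(actions(state),
               key=lambda a: score(state, a)
                   + lookahead_score(apply_fn(state, a), depth - 1,
                                     actions, apply_fn, score))
```

With `depth = 1` this reduces to an ordinary greedy history-based decoder. Per the abstract, training would use a perceptron-style update whenever the lookahead search selects a wrong action; the details of that update and its convergence analysis are in the paper itself.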