NTPC: N-fold templated piped correction

  • Authors:
  • Dekai Wu; Grace Ngai; Marine Carpuat

  • Affiliations:
  • Human Language Technology Center, Dept. of Computer Science, Hong Kong University of Science and Technology (HKUST), Hong Kong; Dept. of Computing, Hong Kong Polytechnic University, Kowloon, Hong Kong; Human Language Technology Center, Dept. of Computer Science, Hong Kong University of Science and Technology (HKUST), Hong Kong

  • Venue:
  • IJCNLP'04 Proceedings of the First International Joint Conference on Natural Language Processing
  • Year:
  • 2004

Abstract

We describe a broadly applicable, conservative error-correcting model, N-fold Templated Piped Correction or NTPC (“nitpick”), that consistently improves the accuracy of existing high-accuracy base models. Under circumstances where most obvious approaches actually reduce accuracy more than they improve it, NTPC nevertheless comes with little risk of accidentally degrading performance. NTPC is particularly well suited for natural language applications involving high-dimensional feature spaces, such as bracketing and disambiguation tasks, since its easily customizable template-driven learner allows efficient search over the kind of complex feature combinations that have typically eluded the base models. We show empirically that NTPC yields small but consistent accuracy gains on top of even high-performing models like boosting. We also give evidence that the various extreme design parameters in NTPC are indeed necessary for the intended operating range, even though they diverge from usual practice.
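
To make the pipeline described in the abstract concrete, the sketch below illustrates one plausible reading of an NTPC-style setup: a base model is jackknifed over n folds of the training data so that correction rules are learned from realistic base-model errors, templated rules are kept only if they pass a very conservative filter, and the surviving rules are piped after the base model at test time. Everything here is an illustrative assumption rather than the authors' implementation: the scikit-learn base-model interface, the representation of templates as tuples of feature column indices, and the `min_support` and zero-new-errors criteria are choices made for this sketch only.

```python
"""Illustrative sketch of an NTPC-style correction pipeline.

NOT the authors' implementation: the base-model interface, template
representation, and rule-selection thresholds are assumptions.
"""
from collections import defaultdict

import numpy as np
from sklearn.base import clone
from sklearn.model_selection import KFold


def jackknife_predictions(base_model, X, y, n_folds=10):
    """Predict each training example with a base model that never saw it,
    so correction rules are learned from realistic base-model errors."""
    preds = np.empty_like(y)
    for train_idx, held_idx in KFold(n_splits=n_folds).split(X):
        fold_model = clone(base_model).fit(X[train_idx], y[train_idx])
        preds[held_idx] = fold_model.predict(X[held_idx])
    return preds


def learn_correction_rules(X, y, base_preds, templates, min_support=1):
    """Learn conservative templated correction rules.

    A rule maps (template, context feature values, base-model label) to a new
    label.  It is kept only if it fixes at least `min_support` base-model
    errors and never proposes a wrong label anywhere it would fire."""
    # Pass 1: propose a candidate rule for every base-model error, per template.
    proposals = {}
    for i in np.flatnonzero(base_preds != y):
        for tmpl in templates:
            key = (tmpl, tuple(X[i, list(tmpl)]), base_preds[i])
            proposals.setdefault(key, y[i])

    # Pass 2: count how many errors each candidate fixes and how many wrong
    # changes it would make anywhere in the training data.
    fixed, harmed = defaultdict(int), defaultdict(int)
    for i in range(len(y)):
        for tmpl in templates:
            key = (tmpl, tuple(X[i, list(tmpl)]), base_preds[i])
            if key not in proposals:
                continue
            if proposals[key] == y[i] and base_preds[i] != y[i]:
                fixed[key] += 1
            elif proposals[key] != y[i]:
                harmed[key] += 1
    return {k: v for k, v in proposals.items()
            if fixed[k] >= min_support and harmed[k] == 0}


def pipe_corrections(X, base_preds, rules, templates):
    """At test time, pipe the base model's output through the learned rules,
    changing a prediction only when a rule's full context matches exactly."""
    corrected = base_preds.copy()
    for i in range(len(corrected)):
        for tmpl in templates:
            key = (tmpl, tuple(X[i, list(tmpl)]), base_preds[i])
            if key in rules:
                corrected[i] = rules[key]
                break  # apply at most one correction per example
    return corrected
```

The zero-tolerance filter in `learn_correction_rules` is one plausible way to realize the conservative, low-risk behaviour the abstract emphasizes; the actual thresholds and template language used by NTPC should be taken from the paper itself.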