Variable Length Language Model for Chinese Character Recognition

  • Authors:
  • Sheng Zhang; Xianli Wu

  • Venue:
  • ICMI '00 Proceedings of the Third International Conference on Advances in Multimodal Interfaces
  • Year:
  • 2000

Abstract

We present a new type of language model, the variable-length language model (VLLM), whose order is non-deterministic and which builds on the 5-gram combined model proposed previously. Its main advantage is that it retains the capability of the 5-gram combined model while also reflecting the structural features of each line of the test text. Unlike previous language models, whose order is fixed in advance, the VLLM uses the current recognition result to determine which language model should be applied next, thereby choosing the model automatically. The VLLM also resolves the problem that arises when punctuation marks appear. Experiments based on these improvements yield encouraging results.
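To make the idea of a variable-length model concrete, the sketch below shows a toy character n-gram model whose effective order varies per prediction: it backs off to shorter contexts when the longer context was never seen in training, and it resets the context at punctuation marks rather than conditioning across them. This is only an illustrative sketch under those assumptions; the class, its methods, and the backoff scheme are hypothetical and are not the authors' actual VLLM.

```python
from collections import defaultdict


class VariableLengthNgramModel:
    """Toy character n-gram model with a variable effective order.

    Two "variable length" behaviors, loosely mirroring the abstract:
    - back off to shorter contexts when the longer one is unseen;
    - truncate the context at punctuation marks instead of
      conditioning across them.
    Hypothetical sketch, not the paper's VLLM.
    """

    def __init__(self, max_order=5, punctuation="，。！？,.!?"):
        self.max_order = max_order
        self.punctuation = set(punctuation)
        # counts[context][next_char] = frequency of next_char after context
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        # Record every context of length 0 .. max_order-1 before each char.
        for i, ch in enumerate(text):
            for n in range(self.max_order):
                context = text[max(0, i - n):i]
                self.counts[context][ch] += 1

    def predict(self, context):
        # Reset at the last punctuation mark: the model does not
        # condition across sentence boundaries.
        for j in range(len(context) - 1, -1, -1):
            if context[j] in self.punctuation:
                context = context[j + 1:]
                break
        context = context[-(self.max_order - 1):]
        # Variable length: shorten the context until it was seen in training.
        while context and context not in self.counts:
            context = context[1:]
        candidates = self.counts[context]
        if not candidates:
            return None  # empty model
        return max(candidates, key=candidates.get)
```

For example, after `train("ab.ab.ab")`, calling `predict("xx.a")` truncates the context at the period, keeps only `"a"`, and predicts `"b"`; an unseen context like `"xx"` backs off step by step until a known (possibly empty) context is found.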