Interactive error repair for an online handwriting interface
CHI 98 Conference Summary on Human Factors in Computing Systems
Collaboration using multiple PDAs connected to a PC
CSCW '98 Proceedings of the 1998 ACM conference on Computer supported cooperative work
Model-based and empirical evaluation of multimodal interactive error correction
Proceedings of the SIGCHI conference on Human Factors in Computing Systems
On-Line and Off-Line Handwriting Recognition: A Comprehensive Survey
IEEE Transactions on Pattern Analysis and Machine Intelligence
Multimodal error correction for speech user interfaces
ACM Transactions on Computer-Human Interaction (TOCHI)
Writer Adaptation for Online Handwriting Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Writer Adaptation for Online Handwriting Recognition
Proceedings of the 23rd DAGM-Symposium on Pattern Recognition
Writing Speed Normalization for On-Line Handwritten Text Recognition
ICDAR '05 Proceedings of the Eighth International Conference on Document Analysis and Recognition
Online Handwritten Shape Recognition Using Segmental Hidden Markov Models
IEEE Transactions on Pattern Analysis and Machine Intelligence
In this paper we describe the NPen++ system for writer-independent on-line handwriting recognition. The recognizer needs no training for a particular writer and can recognize any common writing style (cursive, hand-printed, or a mixture of both). The neural network architecture, originally proposed for continuous speech recognition tasks, and the preprocessing techniques of NPen++ are designed to make heavy use of the dynamic writing information, i.e. the temporal sequence of data points recorded on an LCD tablet or digitizer. We present results for the writer-independent recognition of isolated words. Tested on dictionary sizes from 1,000 up to 100,000 words, recognition rates range from 98.0% for the 1,000-word dictionary to 91.4% on a 20,000-word dictionary and 82.9% for the 100,000-word dictionary. No language models are used to achieve these results.
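The dynamic writing information the abstract refers to is the temporal order of pen points, which off-line (image-based) recognizers do not have. As a minimal sketch of how such information can be turned into features, the hypothetical function below (not from the paper; the point format and feature choice are assumptions) computes per-point writing-direction features from consecutive tablet samples, a common preprocessing step in on-line handwriting recognition:

```python
import math

def direction_features(points):
    """Given a pen stroke as a temporal list of (x, y) samples,
    return (cos, sin) of the writing direction between consecutive
    points -- a feature that only exists for on-line data."""
    feats = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            # Pen did not move between samples; emit a neutral feature.
            feats.append((0.0, 0.0))
        else:
            feats.append((dx / norm, dy / norm))
    return feats

# A straight horizontal stroke yields direction (1, 0) at every step.
stroke = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(direction_features(stroke))  # [(1.0, 0.0), (1.0, 0.0), (1.0, 0.0)]
```

Sequences of such features can then be fed to a temporal model such as the neural network architecture described in the abstract.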