We present a gestural interface for entering text on a mobile device via continuous movements, with control based on feedback from a probabilistic language model. Text is represented by continuous trajectories over a hexagonal tessellation, and entry becomes a manual control task. The language model is used to infer user intentions and predict future actions, and the local dynamics adapt to reduce the effort of entering probable text. The result is an interface with a stable layout, which aids user learning, while still supporting the user through the probability model. Experimental results demonstrate that this technique reduces variance in gesture trajectories and achieves throughput competitive with other mobile text entry methods. This paper provides a practical example of a user interface that makes uncertainty explicit to the user. Probabilistic feedback from hypothesised goals applies broadly to gestural interfaces and is well suited to supporting multimodal interaction.
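The core idea of adapting local dynamics can be sketched in a few lines: a language model supplies next-character probabilities, and the control gain toward each character's cell is scaled so that probable characters behave like larger, easier targets. The sketch below is purely illustrative — the bigram model, the `adapted_gain` function, and its parameters are assumptions for exposition, not the paper's implementation (which uses a richer language model such as PPM).

```python
import math

def next_char_probs(context, counts):
    """Toy bigram language model: P(next char | last char),
    estimated from a count table. Illustrative stand-in for the
    paper's language model."""
    last = context[-1] if context else ""
    row = counts.get(last, {})
    total = sum(row.values()) or 1
    return {c: n / total for c, n in row.items()}

def adapted_gain(probs, char, base_gain=1.0, strength=0.5):
    """Hypothetical dynamics adaptation: scale the control gain
    toward a hexagonal cell so probable characters act as larger
    targets (higher gain) and improbable ones as smaller targets."""
    p = probs.get(char, 0.0)
    # log1p keeps the adaptation bounded and smooth in p.
    return base_gain * (1.0 + strength * math.log1p(p * 25))

# After typing 't', 'h' is far more probable than 'e' in this toy
# count table, so movement toward the 'h' cell is amplified more.
counts = {"t": {"h": 60, "o": 25, "e": 15}}
probs = next_char_probs("t", counts)
```

With these assumed numbers, `adapted_gain(probs, "h")` exceeds `adapted_gain(probs, "e")`, so the same pointing effort covers more distance toward the likely continuation — while the hexagonal layout itself stays fixed, preserving the stable-layout property described above.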