We present a straightforward solution for incorporating text-editing gestures into mixed-initiative user interfaces (MIUIs). Our approach provides (1) disambiguation from handwritten text, (2) editing context, (3) virtually perfect accuracy, and (4) a trivial implementation. An evaluation study with 32 e-pen users showed that our approach is suitable for production environments. In addition, performance tests on a desktop PC and on a mobile device revealed that gestures are recognized very quickly (0.1 ms on average). Taken together, these results suggest that our approach can help developers deploy simple yet effective, high-performance text-editing gestures.
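The abstract does not include the authors' implementation, but the "trivial implementation" and sub-millisecond recognition times are consistent with lightweight template matching in the style of the $-family recognizers ($1, Protractor, $P). As a hedged illustration only, the sketch below shows a minimal template matcher: resample each stroke to a fixed number of points, normalize for position and scale, and pick the nearest template by summed point-to-point distance. All function names, template shapes, and parameter values here are our own assumptions, not the paper's.

```python
import math

def resample(points, n=32):
    """Resample a stroke to n roughly equidistant points ($-family style).
    Assumption: points is a list of (x, y) tuples with at least one point."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    step = total / (n - 1)
    pts = list(points)
    out = [pts[0]]
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            x = pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0])
            y = pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1])
            out.append((x, y))
            pts.insert(i, (x, y))  # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    out = out[:n]
    while len(out) < n:          # pad if floating-point error dropped a point
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate to the centroid and scale by the larger bounding-box side."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in points]

def classify(stroke, templates):
    """Return the label of the template closest to the input stroke."""
    cand = normalize(resample(stroke))
    best_label, best_dist = None, float("inf")
    for label, tmpl in templates.items():
        ref = normalize(resample(tmpl))
        dist = sum(math.dist(a, b) for a, b in zip(cand, ref))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical editing-gesture templates: a strike-through and an insertion caret.
templates = {
    "strike-through": [(0, 0), (10, 0)],
    "caret": [(0, 10), (5, 0), (10, 10)],
}
```

Because each classification is a handful of arithmetic operations over a few dozen points, a matcher of this shape runs well under a millisecond per gesture on commodity hardware, which is in the same regime as the 0.1 ms average the authors report.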